00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1914 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3175 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.074 The recommended git tool is: git 00:00:00.074 using credential 00000000-0000-0000-0000-000000000002 00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.121 Fetching changes from the remote Git repository 00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.221 > git --version # 'git version 2.39.2' 00:00:00.221 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.258 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.258 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.745 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.756 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.766 Checking out Revision ea7646cba2e992b05bb6a53407de7fbcf465b5c6 (FETCH_HEAD) 00:00:06.766 > git config core.sparsecheckout # timeout=10 00:00:06.777 > git read-tree -mu HEAD # timeout=10 00:00:06.791 > git checkout -f ea7646cba2e992b05bb6a53407de7fbcf465b5c6 # timeout=5 00:00:06.811 Commit message: "ansible/inventory: Fix GP16's BMC address" 00:00:06.811 > git rev-list --no-walk ea7646cba2e992b05bb6a53407de7fbcf465b5c6 # timeout=10 00:00:06.893 [Pipeline] Start of Pipeline 00:00:06.907 [Pipeline] library 00:00:06.909 Loading library shm_lib@master 00:00:06.909 Library shm_lib@master is cached. Copying from home. 00:00:06.926 [Pipeline] node 00:00:06.937 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu20-vg-autotest_2 00:00:06.939 [Pipeline] { 00:00:06.952 [Pipeline] catchError 00:00:06.954 [Pipeline] { 00:00:06.968 [Pipeline] wrap 00:00:06.978 [Pipeline] { 00:00:06.987 [Pipeline] stage 00:00:06.989 [Pipeline] { (Prologue) 00:00:07.009 [Pipeline] echo 00:00:07.011 Node: VM-host-SM16 00:00:07.017 [Pipeline] cleanWs 00:00:07.025 [WS-CLEANUP] Deleting project workspace... 00:00:07.025 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.031 [WS-CLEANUP] done 00:00:07.198 [Pipeline] setCustomBuildProperty 00:00:07.248 [Pipeline] nodesByLabel 00:00:07.249 Found a total of 2 nodes with the 'sorcerer' label 00:00:07.257 [Pipeline] httpRequest 00:00:07.260 HttpMethod: GET 00:00:07.261 URL: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:07.262 Sending request to url: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:07.264 Response Code: HTTP/1.1 200 OK 00:00:07.265 Success: Status code 200 is in the accepted range: 200,404 00:00:07.266 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:08.074 [Pipeline] sh 00:00:08.355 + tar --no-same-owner -xf jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:08.372 [Pipeline] httpRequest 00:00:08.376 HttpMethod: GET 00:00:08.377 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:08.377 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:08.378 Response Code: HTTP/1.1 200 OK 00:00:08.379 Success: Status code 200 is in the accepted range: 200,404 00:00:08.380 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:28.714 [Pipeline] sh 00:00:29.039 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:31.579 [Pipeline] sh 00:00:31.857 + git -C spdk log --oneline -n5 00:00:31.857 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:00:31.857 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller 00:00:31.857 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:00:31.857 3651466d0 test/scheduler: Calculate median of the cpu load samples 00:00:31.857 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc() 00:00:31.876 [Pipeline] writeFile 00:00:31.893 [Pipeline] sh 00:00:32.174 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:32.186 [Pipeline] sh 00:00:32.469 + cat autorun-spdk.conf 00:00:32.469 SPDK_TEST_UNITTEST=1 00:00:32.469 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.469 SPDK_TEST_NVME=1 00:00:32.469 SPDK_TEST_BLOCKDEV=1 00:00:32.469 SPDK_RUN_ASAN=1 00:00:32.469 SPDK_RUN_UBSAN=1 00:00:32.469 SPDK_TEST_RAID5=1 00:00:32.469 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.476 RUN_NIGHTLY=1 00:00:32.478 [Pipeline] } 00:00:32.496 [Pipeline] // stage 00:00:32.517 [Pipeline] stage 00:00:32.520 [Pipeline] { (Run VM) 00:00:32.537 [Pipeline] sh 00:00:32.822 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:32.822 + echo 'Start stage prepare_nvme.sh' 00:00:32.822 Start stage prepare_nvme.sh 00:00:32.822 + [[ -n 3 ]] 00:00:32.822 + disk_prefix=ex3 00:00:32.822 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest_2 ]] 00:00:32.822 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf ]] 00:00:32.822 + source /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf 00:00:32.822 ++ SPDK_TEST_UNITTEST=1 00:00:32.822 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.822 ++ SPDK_TEST_NVME=1 00:00:32.822 ++ SPDK_TEST_BLOCKDEV=1 00:00:32.822 ++ SPDK_RUN_ASAN=1 00:00:32.822 ++ SPDK_RUN_UBSAN=1 00:00:32.822 ++ SPDK_TEST_RAID5=1 00:00:32.822 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.822 ++ RUN_NIGHTLY=1 00:00:32.822 + cd /var/jenkins/workspace/ubuntu20-vg-autotest_2 
00:00:32.822 + nvme_files=() 00:00:32.822 + declare -A nvme_files 00:00:32.822 + backend_dir=/var/lib/libvirt/images/backends 00:00:32.822 + nvme_files['nvme.img']=5G 00:00:32.822 + nvme_files['nvme-cmb.img']=5G 00:00:32.822 + nvme_files['nvme-multi0.img']=4G 00:00:32.822 + nvme_files['nvme-multi1.img']=4G 00:00:32.822 + nvme_files['nvme-multi2.img']=4G 00:00:32.822 + nvme_files['nvme-openstack.img']=8G 00:00:32.822 + nvme_files['nvme-zns.img']=5G 00:00:32.822 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:32.822 + (( SPDK_TEST_FTL == 1 )) 00:00:32.822 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:32.822 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:32.822 + for nvme in "${!nvme_files[@]}" 00:00:32.822 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:00:32.822 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.822 + for nvme in "${!nvme_files[@]}" 00:00:32.822 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:00:33.390 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.390 + for nvme in "${!nvme_files[@]}" 00:00:33.390 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:00:33.390 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:33.390 + for nvme in "${!nvme_files[@]}" 00:00:33.390 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:00:33.390 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.390 + for nvme in "${!nvme_files[@]}" 00:00:33.390 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:00:33.390 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.390 + for nvme in "${!nvme_files[@]}" 00:00:33.390 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:00:33.390 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.390 + for nvme in "${!nvme_files[@]}" 00:00:33.390 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:00:34.326 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:34.326 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:00:34.326 + echo 'End stage prepare_nvme.sh' 00:00:34.326 End stage prepare_nvme.sh 00:00:34.338 [Pipeline] sh 00:00:34.618 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:34.618 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -H -a -v -f ubuntu2004 00:00:34.618 00:00:34.618 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/scripts/vagrant 00:00:34.618 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk 00:00:34.618 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest_2 00:00:34.618 HELP=0 00:00:34.618 DRY_RUN=0 00:00:34.618 
NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img, 00:00:34.618 NVME_DISKS_TYPE=nvme, 00:00:34.618 NVME_AUTO_CREATE=0 00:00:34.618 NVME_DISKS_NAMESPACES=, 00:00:34.618 NVME_CMB=, 00:00:34.618 NVME_PMR=, 00:00:34.618 NVME_ZNS=, 00:00:34.618 NVME_MS=, 00:00:34.618 NVME_FDP=, 00:00:34.618 SPDK_VAGRANT_DISTRO=ubuntu2004 00:00:34.618 SPDK_VAGRANT_VMCPU=10 00:00:34.618 SPDK_VAGRANT_VMRAM=12288 00:00:34.618 SPDK_VAGRANT_PROVIDER=libvirt 00:00:34.618 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:34.618 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:34.618 SPDK_OPENSTACK_NETWORK=0 00:00:34.619 VAGRANT_PACKAGE_BOX=0 00:00:34.619 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:34.619 FORCE_DISTRO=true 00:00:34.619 VAGRANT_BOX_VERSION= 00:00:34.619 EXTRA_VAGRANTFILES= 00:00:34.619 NIC_MODEL=e1000 00:00:34.619 00:00:34.619 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt' 00:00:34.619 /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest_2 00:00:37.179 Bringing machine 'default' up with 'libvirt' provider... 00:00:37.836 ==> default: Creating image (snapshot of base box volume). 00:00:37.836 ==> default: Creating domain with the following settings... 00:00:37.836 ==> default: -- Name: ubuntu2004-20.04-1712646987-2220_default_1718109956_8c453f8f5c7d6e13a733 00:00:37.836 ==> default: -- Domain type: kvm 00:00:37.836 ==> default: -- Cpus: 10 00:00:37.836 ==> default: -- Feature: acpi 00:00:37.836 ==> default: -- Feature: apic 00:00:37.836 ==> default: -- Feature: pae 00:00:37.836 ==> default: -- Memory: 12288M 00:00:37.836 ==> default: -- Memory Backing: hugepages: 00:00:37.836 ==> default: -- Management MAC: 00:00:37.836 ==> default: -- Loader: 00:00:37.836 ==> default: -- Nvram: 00:00:37.836 ==> default: -- Base box: spdk/ubuntu2004 00:00:37.836 ==> default: -- Storage pool: default 00:00:37.836 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1718109956_8c453f8f5c7d6e13a733.img (20G) 00:00:37.836 ==> default: -- Volume Cache: default 00:00:37.836 ==> default: -- Kernel: 00:00:37.836 ==> default: -- Initrd: 00:00:37.836 ==> default: -- Graphics Type: vnc 00:00:37.836 ==> default: -- Graphics Port: -1 00:00:37.836 ==> default: -- Graphics IP: 127.0.0.1 00:00:37.836 ==> default: -- Graphics Password: Not defined 00:00:37.836 ==> default: -- Video Type: cirrus 00:00:37.836 ==> default: -- Video VRAM: 9216 00:00:37.836 ==> default: -- Sound Type: 00:00:37.836 ==> default: -- Keymap: en-us 00:00:37.836 ==> default: -- TPM Path: 00:00:37.836 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:37.836 ==> default: -- Command line args: 00:00:37.836 ==> default: -> value=-device, 00:00:37.836 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:37.836 ==> default: -> value=-drive, 00:00:37.836 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:00:37.836 ==> default: -> value=-device, 00:00:37.836 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:38.093 ==> default: Creating shared folders metadata... 00:00:38.093 ==> default: Starting domain. 00:00:39.995 ==> default: Waiting for domain to get an IP address... 00:00:52.195 ==> default: Waiting for SSH to become available... 
00:00:54.094 ==> default: Configuring and enabling network interfaces... 00:00:55.996 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:01.268 ==> default: Mounting SSHFS shared folder... 00:01:01.268 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output 00:01:01.268 ==> default: Checking Mount.. 00:01:03.799 ==> default: Checking Mount.. 00:01:04.057 ==> default: Folder Successfully Mounted! 00:01:04.057 ==> default: Running provisioner: file... 00:01:04.057 default: ~/.gitconfig => .gitconfig 00:01:04.316 00:01:04.316 SUCCESS! 00:01:04.316 00:01:04.316 cd to /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt and type "vagrant ssh" to use. 00:01:04.316 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:04.316 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt" to destroy all trace of vm. 00:01:04.316 00:01:04.327 [Pipeline] } 00:01:04.348 [Pipeline] // stage 00:01:04.357 [Pipeline] dir 00:01:04.357 Running in /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt 00:01:04.359 [Pipeline] { 00:01:04.372 [Pipeline] catchError 00:01:04.374 [Pipeline] { 00:01:04.389 [Pipeline] sh 00:01:04.757 + vagrant ssh-config --host vagrant 00:01:04.757 + sed -ne /^Host/,$p 00:01:04.757 + tee ssh_conf 00:01:08.043 Host vagrant 00:01:08.043 HostName 192.168.121.142 00:01:08.043 User vagrant 00:01:08.043 Port 22 00:01:08.043 UserKnownHostsFile /dev/null 00:01:08.043 StrictHostKeyChecking no 00:01:08.043 PasswordAuthentication no 00:01:08.043 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004 00:01:08.043 IdentitiesOnly yes 00:01:08.043 LogLevel FATAL 00:01:08.043 ForwardAgent yes 00:01:08.043 ForwardX11 yes 00:01:08.043 00:01:08.056 [Pipeline] withEnv 00:01:08.058 [Pipeline] { 00:01:08.073 [Pipeline] sh 00:01:08.352 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:08.352 source /etc/os-release 00:01:08.352 [[ -e /image.version ]] && img=$(< /image.version) 00:01:08.352 # Minimal, systemd-like check. 00:01:08.352 if [[ -e /.dockerenv ]]; then 00:01:08.352 # Clear garbage from the node's name: 00:01:08.352 # agt-er_autotest_547-896 -> autotest_547-896 00:01:08.352 # $HOSTNAME is the actual container id 00:01:08.352 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:08.352 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:08.352 # We can assume this is a mount from a host where container is running, 00:01:08.352 # so fetch its hostname to easily identify the target swarm worker. 
00:01:08.352 container="$(< /etc/hostname) ($agent)" 00:01:08.352 else 00:01:08.352 # Fallback 00:01:08.352 container=$agent 00:01:08.352 fi 00:01:08.352 fi 00:01:08.352 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:08.352 00:01:08.930 [Pipeline] } 00:01:08.952 [Pipeline] // withEnv 00:01:08.960 [Pipeline] setCustomBuildProperty 00:01:08.975 [Pipeline] stage 00:01:08.977 [Pipeline] { (Tests) 00:01:08.995 [Pipeline] sh 00:01:09.272 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:09.852 [Pipeline] sh 00:01:10.132 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:10.714 [Pipeline] timeout 00:01:10.714 Timeout set to expire in 1 hr 30 min 00:01:10.716 [Pipeline] { 00:01:10.732 [Pipeline] sh 00:01:11.011 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:11.949 HEAD is now at 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:01:12.004 [Pipeline] sh 00:01:12.299 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:12.866 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:12.881 [Pipeline] sh 00:01:13.160 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:13.743 [Pipeline] sh 00:01:14.021 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu20-vg-autotest ./autoruner.sh spdk_repo 00:01:14.589 ++ readlink -f spdk_repo 00:01:14.589 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:14.589 + [[ -n /home/vagrant/spdk_repo ]] 00:01:14.589 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:14.589 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:14.589 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:14.589 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:14.589 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:14.589 + [[ ubuntu20-vg-autotest == pkgdep-* ]] 00:01:14.589 + cd /home/vagrant/spdk_repo 00:01:14.589 + source /etc/os-release 00:01:14.589 ++ NAME=Ubuntu 00:01:14.589 ++ VERSION='20.04.6 LTS (Focal Fossa)' 00:01:14.589 ++ ID=ubuntu 00:01:14.589 ++ ID_LIKE=debian 00:01:14.589 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS' 00:01:14.589 ++ VERSION_ID=20.04 00:01:14.589 ++ HOME_URL=https://www.ubuntu.com/ 00:01:14.589 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:14.589 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:14.589 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:14.589 ++ VERSION_CODENAME=focal 00:01:14.589 ++ UBUNTU_CODENAME=focal 00:01:14.589 + uname -a 00:01:14.589 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:14.589 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:14.589 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:14.848 Hugepages 00:01:14.848 node hugesize free / total 00:01:14.848 node0 1048576kB 0 / 0 00:01:14.848 node0 2048kB 0 / 0 00:01:14.848 00:01:14.848 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:14.848 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:14.848 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:14.848 + rm -f /tmp/spdk-ld-path 00:01:14.848 + source autorun-spdk.conf 00:01:14.848 ++ SPDK_TEST_UNITTEST=1 00:01:14.848 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.848 ++ SPDK_TEST_NVME=1 00:01:14.848 ++ SPDK_TEST_BLOCKDEV=1 00:01:14.848 ++ SPDK_RUN_ASAN=1 00:01:14.848 ++ SPDK_RUN_UBSAN=1 00:01:14.848 ++ SPDK_TEST_RAID5=1 00:01:14.848 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.848 ++ RUN_NIGHTLY=1 00:01:14.848 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:14.848 + [[ -n '' ]] 00:01:14.848 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:14.848 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:14.848 + for M in /var/spdk/build-*-manifest.txt 00:01:14.848 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:14.848 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:14.848 + for M in /var/spdk/build-*-manifest.txt 00:01:14.848 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:14.848 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:14.848 ++ uname 00:01:14.848 + [[ Linux == \L\i\n\u\x ]] 00:01:14.848 + sudo dmesg -T 00:01:14.848 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:14.848 + sudo dmesg --clear 00:01:14.848 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:14.848 + dmesg_pid=2350 00:01:14.848 + [[ Ubuntu == FreeBSD ]] 00:01:14.848 + sudo dmesg -Tw 00:01:14.848 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.848 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.848 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:14.848 + [[ -x /usr/src/fio-static/fio ]] 00:01:14.848 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:14.848 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:14.848 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:14.848 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:14.848 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:14.848 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:14.848 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:14.848 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:14.848 Test configuration: 00:01:14.848 SPDK_TEST_UNITTEST=1 00:01:14.848 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.848 SPDK_TEST_NVME=1 00:01:14.848 SPDK_TEST_BLOCKDEV=1 00:01:14.848 SPDK_RUN_ASAN=1 00:01:14.848 SPDK_RUN_UBSAN=1 00:01:14.848 SPDK_TEST_RAID5=1 00:01:14.848 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.848 RUN_NIGHTLY=1 12:46:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:14.848 12:46:33 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:14.848 12:46:33 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:14.848 12:46:33 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:14.848 12:46:33 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:14.848 12:46:33 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:14.848 12:46:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:14.848 12:46:33 -- paths/export.sh@5 -- $ export PATH 00:01:14.848 12:46:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:14.848 12:46:33 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:14.848 12:46:33 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:14.848 12:46:33 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718109993.XXXXXX 00:01:14.848 12:46:33 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718109993.8KB4at 00:01:14.848 12:46:33 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:14.848 12:46:33 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:14.848 12:46:33 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:14.848 12:46:33 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:14.848 12:46:33 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:14.848 12:46:33 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:14.848 12:46:33 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:14.848 12:46:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.107 12:46:34 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:01:15.107 12:46:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:15.107 12:46:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:15.107 12:46:34 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:15.107 12:46:34 -- spdk/autobuild.sh@16 -- $ date -u 00:01:15.108 Tue Jun 11 12:46:34 UTC 2024 00:01:15.108 12:46:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:15.108 LTS-43-g130b9406a 00:01:15.108 12:46:34 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:15.108 12:46:34 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:15.108 12:46:34 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:15.108 12:46:34 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:15.108 12:46:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.108 ************************************ 00:01:15.108 START TEST asan 00:01:15.108 ************************************ 00:01:15.108 using asan 00:01:15.108 12:46:34 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:01:15.108 00:01:15.108 real 0m0.000s 00:01:15.108 user 0m0.000s 00:01:15.108 sys 0m0.000s 00:01:15.108 12:46:34 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:15.108 ************************************ 00:01:15.108 12:46:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.108 END TEST asan 00:01:15.108 ************************************ 00:01:15.108 12:46:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:15.108 12:46:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:15.108 12:46:34 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:15.108 12:46:34 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:15.108 12:46:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.108 ************************************ 00:01:15.108 START TEST ubsan 00:01:15.108 ************************************ 00:01:15.108 using ubsan 00:01:15.108 12:46:34 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:15.108 00:01:15.108 real 0m0.000s 00:01:15.108 user 0m0.000s 00:01:15.108 sys 0m0.000s 00:01:15.108 12:46:34 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:15.108 ************************************ 00:01:15.108 END TEST ubsan 00:01:15.108 12:46:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.108 ************************************ 00:01:15.108 12:46:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.108 12:46:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.108 12:46:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.108 12:46:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.108 12:46:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.108 12:46:34 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:15.108 12:46:34 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:15.108 12:46:34 -- common/autobuild_common.sh@411 -- $ run_test unittest_build 
_unittest_build 00:01:15.108 12:46:34 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:15.108 12:46:34 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:15.108 12:46:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.108 ************************************ 00:01:15.108 START TEST unittest_build 00:01:15.108 ************************************ 00:01:15.108 12:46:34 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:01:15.108 12:46:34 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:01:15.108 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:15.108 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:15.675 Using 'verbs' RDMA provider 00:01:30.809 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:43.012 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:43.012 Creating mk/config.mk...done. 00:01:43.012 Creating mk/cc.flags.mk...done. 00:01:43.012 Type 'make' to build. 00:01:43.012 12:47:01 -- common/autobuild_common.sh@403 -- $ make -j10 00:01:43.012 make[1]: Nothing to be done for 'all'. 00:01:44.911 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.170 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.170 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.170 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.170 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.429 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.429 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.429 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.429 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.429 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.429 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.429 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.688 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.688 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:45.688 ./include//reg_sizes.asm:208: 
warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
[... the same NASM warning for ./include//reg_sizes.asm:208 and ./include//reg_sizes.asm:358 repeats for every remaining ISA-L and ISA-L-crypto assembly object, through 00:01:53.989; duplicate log lines collapsed ...]
`.note.gnu.property' [-w+other] 00:01:53.989 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:53.989 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:53.989 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:53.989 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:53.989 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:53.989 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:54.247 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:54.247 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:54.247 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:54.247 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:54.247 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:54.505 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:54.505 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:54.505 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:54.505 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:55.072 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:55.072 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:55.072 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:55.330 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:55.330 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:55.897 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:55.897 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:55.897 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:55.897 ./include//reg_sizes.asm:358: 
warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.464 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.464 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.464 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.464 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.464 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.464 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.722 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.722 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.980 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.980 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.980 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.980 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:56.980 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.240 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.240 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.240 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.240 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.240 The Meson build system 00:01:57.240 Version: 1.4.0 00:01:57.240 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:57.240 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:57.240 Build type: native build 00:01:57.240 Program cat found: YES (/usr/bin/cat) 00:01:57.240 Project name: DPDK 00:01:57.240 Project version: 23.11.0 00:01:57.240 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:01:57.240 C linker for the host machine: cc ld.bfd 2.34 00:01:57.240 Host machine cpu family: x86_64 00:01:57.240 Host machine cpu: x86_64 00:01:57.240 Message: ## Building in Developer Mode ## 00:01:57.240 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:57.240 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:57.240 Program options-ibverbs-static.sh 
found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:57.240 Program python3 found: YES (/usr/bin/python3) 00:01:57.240 Program cat found: YES (/usr/bin/cat) 00:01:57.240 Compiler for C supports arguments -march=native: YES 00:01:57.240 Checking for size of "void *" : 8 00:01:57.240 Checking for size of "void *" : 8 (cached) 00:01:57.240 Library m found: YES 00:01:57.240 Library numa found: YES 00:01:57.240 Has header "numaif.h" : YES 00:01:57.240 Library fdt found: NO 00:01:57.240 Library execinfo found: NO 00:01:57.240 Has header "execinfo.h" : YES 00:01:57.240 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:01:57.240 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:57.240 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:57.240 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:57.240 Run-time dependency openssl found: YES 1.1.1f 00:01:57.240 Run-time dependency libpcap found: NO (tried pkgconfig) 00:01:57.240 Library pcap found: NO 00:01:57.240 Compiler for C supports arguments -Wcast-qual: YES 00:01:57.240 Compiler for C supports arguments -Wdeprecated: YES 00:01:57.240 Compiler for C supports arguments -Wformat: YES 00:01:57.240 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:57.240 Compiler for C supports arguments -Wformat-security: YES 00:01:57.240 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.240 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:57.240 Compiler for C supports arguments -Wnested-externs: YES 00:01:57.240 Compiler for C supports arguments -Wold-style-definition: YES 00:01:57.240 Compiler for C supports arguments -Wpointer-arith: YES 00:01:57.240 Compiler for C supports arguments -Wsign-compare: YES 00:01:57.240 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:57.240 Compiler for C supports arguments -Wundef: YES 00:01:57.240 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.240 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:57.240 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:57.240 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.240 Program objdump found: YES (/usr/bin/objdump) 00:01:57.240 Compiler for C supports arguments -mavx512f: YES 00:01:57.240 Checking if "AVX512 checking" compiles: YES 00:01:57.240 Fetching value of define "__SSE4_2__" : 1 00:01:57.240 Fetching value of define "__AES__" : 1 00:01:57.240 Fetching value of define "__AVX__" : 1 00:01:57.240 Fetching value of define "__AVX2__" : 1 00:01:57.240 Fetching value of define "__AVX512BW__" : (undefined) 00:01:57.240 Fetching value of define "__AVX512CD__" : (undefined) 00:01:57.240 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:57.240 Fetching value of define "__AVX512F__" : (undefined) 00:01:57.240 Fetching value of define "__AVX512VL__" : (undefined) 00:01:57.240 Fetching value of define "__PCLMUL__" : 1 00:01:57.240 Fetching value of define "__RDRND__" : 1 00:01:57.240 Fetching value of define "__RDSEED__" : 1 00:01:57.240 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:57.240 Fetching value of define "__znver1__" : (undefined) 00:01:57.240 Fetching value of define "__znver2__" : (undefined) 00:01:57.240 Fetching value of define "__znver3__" : (undefined) 00:01:57.240 Fetching value of define "__znver4__" : (undefined) 00:01:57.240 Library asan found: YES 00:01:57.240 Compiler for C supports arguments 
-Wno-format-truncation: YES 00:01:57.240 Message: lib/log: Defining dependency "log" 00:01:57.240 Message: lib/kvargs: Defining dependency "kvargs" 00:01:57.240 Message: lib/telemetry: Defining dependency "telemetry" 00:01:57.240 Library rt found: YES 00:01:57.240 Checking for function "getentropy" : NO 00:01:57.240 Message: lib/eal: Defining dependency "eal" 00:01:57.240 Message: lib/ring: Defining dependency "ring" 00:01:57.240 Message: lib/rcu: Defining dependency "rcu" 00:01:57.240 Message: lib/mempool: Defining dependency "mempool" 00:01:57.240 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.240 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.240 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:57.240 Compiler for C supports arguments -mpclmul: YES 00:01:57.240 Compiler for C supports arguments -maes: YES 00:01:57.240 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.240 Compiler for C supports arguments -mavx512bw: YES 00:01:57.240 Compiler for C supports arguments -mavx512dq: YES 00:01:57.240 Compiler for C supports arguments -mavx512vl: YES 00:01:57.240 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.240 Compiler for C supports arguments -mavx2: YES 00:01:57.240 Compiler for C supports arguments -mavx: YES 00:01:57.240 Message: lib/net: Defining dependency "net" 00:01:57.240 Message: lib/meter: Defining dependency "meter" 00:01:57.240 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.240 Message: lib/pci: Defining dependency "pci" 00:01:57.240 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.240 Message: lib/hash: Defining dependency "hash" 00:01:57.240 Message: lib/timer: Defining dependency "timer" 00:01:57.240 Message: lib/compressdev: Defining dependency "compressdev" 00:01:57.240 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.240 Message: lib/dmadev: Defining dependency "dmadev" 00:01:57.240 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.240 Message: lib/power: Defining dependency "power" 00:01:57.240 Message: lib/reorder: Defining dependency "reorder" 00:01:57.240 Message: lib/security: Defining dependency "security" 00:01:57.240 Has header "linux/userfaultfd.h" : YES 00:01:57.240 Has header "linux/vduse.h" : NO 00:01:57.240 Message: lib/vhost: Defining dependency "vhost" 00:01:57.240 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.240 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.240 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.240 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.240 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:57.240 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:57.240 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:57.240 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:57.240 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:57.240 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:57.240 Program doxygen found: YES (/usr/bin/doxygen) 00:01:57.240 Configuring doxy-api-html.conf using configuration 00:01:57.240 Configuring doxy-api-man.conf using configuration 00:01:57.240 Program mandb found: YES (/usr/bin/mandb) 00:01:57.240 Program sphinx-build found: NO 00:01:57.240 Configuring rte_build_config.h using configuration 00:01:57.240 Message: 00:01:57.240 
================= 00:01:57.240 Applications Enabled 00:01:57.240 ================= 00:01:57.240 00:01:57.240 apps: 00:01:57.240 00:01:57.240 00:01:57.240 Message: 00:01:57.240 ================= 00:01:57.240 Libraries Enabled 00:01:57.240 ================= 00:01:57.240 00:01:57.240 libs: 00:01:57.240 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:57.240 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:57.240 cryptodev, dmadev, power, reorder, security, vhost, 00:01:57.240 00:01:57.240 Message: 00:01:57.241 =============== 00:01:57.241 Drivers Enabled 00:01:57.241 =============== 00:01:57.241 00:01:57.241 common: 00:01:57.241 00:01:57.241 bus: 00:01:57.241 pci, vdev, 00:01:57.241 mempool: 00:01:57.241 ring, 00:01:57.241 dma: 00:01:57.241 00:01:57.241 net: 00:01:57.241 00:01:57.241 crypto: 00:01:57.241 00:01:57.241 compress: 00:01:57.241 00:01:57.241 vdpa: 00:01:57.241 00:01:57.241 00:01:57.241 Message: 00:01:57.241 ================= 00:01:57.241 Content Skipped 00:01:57.241 ================= 00:01:57.241 00:01:57.241 apps: 00:01:57.241 dumpcap: explicitly disabled via build config 00:01:57.241 graph: explicitly disabled via build config 00:01:57.241 pdump: explicitly disabled via build config 00:01:57.241 proc-info: explicitly disabled via build config 00:01:57.241 test-acl: explicitly disabled via build config 00:01:57.241 test-bbdev: explicitly disabled via build config 00:01:57.241 test-cmdline: explicitly disabled via build config 00:01:57.241 test-compress-perf: explicitly disabled via build config 00:01:57.241 test-crypto-perf: explicitly disabled via build config 00:01:57.241 test-dma-perf: explicitly disabled via build config 00:01:57.241 test-eventdev: explicitly disabled via build config 00:01:57.241 test-fib: explicitly disabled via build config 00:01:57.241 test-flow-perf: explicitly disabled via build config 00:01:57.241 test-gpudev: explicitly disabled via build config 00:01:57.241 test-mldev: explicitly disabled via build config 00:01:57.241 test-pipeline: explicitly disabled via build config 00:01:57.241 test-pmd: explicitly disabled via build config 00:01:57.241 test-regex: explicitly disabled via build config 00:01:57.241 test-sad: explicitly disabled via build config 00:01:57.241 test-security-perf: explicitly disabled via build config 00:01:57.241 00:01:57.241 libs: 00:01:57.241 metrics: explicitly disabled via build config 00:01:57.241 acl: explicitly disabled via build config 00:01:57.241 bbdev: explicitly disabled via build config 00:01:57.241 bitratestats: explicitly disabled via build config 00:01:57.241 bpf: explicitly disabled via build config 00:01:57.241 cfgfile: explicitly disabled via build config 00:01:57.241 distributor: explicitly disabled via build config 00:01:57.241 efd: explicitly disabled via build config 00:01:57.241 eventdev: explicitly disabled via build config 00:01:57.241 dispatcher: explicitly disabled via build config 00:01:57.241 gpudev: explicitly disabled via build config 00:01:57.241 gro: explicitly disabled via build config 00:01:57.241 gso: explicitly disabled via build config 00:01:57.241 ip_frag: explicitly disabled via build config 00:01:57.241 jobstats: explicitly disabled via build config 00:01:57.241 latencystats: explicitly disabled via build config 00:01:57.241 lpm: explicitly disabled via build config 00:01:57.241 member: explicitly disabled via build config 00:01:57.241 pcapng: explicitly disabled via build config 00:01:57.241 rawdev: explicitly disabled via build config 00:01:57.241 regexdev: 
explicitly disabled via build config 00:01:57.241 mldev: explicitly disabled via build config 00:01:57.241 rib: explicitly disabled via build config 00:01:57.241 sched: explicitly disabled via build config 00:01:57.241 stack: explicitly disabled via build config 00:01:57.241 ipsec: explicitly disabled via build config 00:01:57.241 pdcp: explicitly disabled via build config 00:01:57.241 fib: explicitly disabled via build config 00:01:57.241 port: explicitly disabled via build config 00:01:57.241 pdump: explicitly disabled via build config 00:01:57.241 table: explicitly disabled via build config 00:01:57.241 pipeline: explicitly disabled via build config 00:01:57.241 graph: explicitly disabled via build config 00:01:57.241 node: explicitly disabled via build config 00:01:57.241 00:01:57.241 drivers: 00:01:57.241 common/cpt: not in enabled drivers build config 00:01:57.241 common/dpaax: not in enabled drivers build config 00:01:57.241 common/iavf: not in enabled drivers build config 00:01:57.241 common/idpf: not in enabled drivers build config 00:01:57.241 common/mvep: not in enabled drivers build config 00:01:57.241 common/octeontx: not in enabled drivers build config 00:01:57.241 bus/auxiliary: not in enabled drivers build config 00:01:57.241 bus/cdx: not in enabled drivers build config 00:01:57.241 bus/dpaa: not in enabled drivers build config 00:01:57.241 bus/fslmc: not in enabled drivers build config 00:01:57.241 bus/ifpga: not in enabled drivers build config 00:01:57.241 bus/platform: not in enabled drivers build config 00:01:57.241 bus/vmbus: not in enabled drivers build config 00:01:57.241 common/cnxk: not in enabled drivers build config 00:01:57.241 common/mlx5: not in enabled drivers build config 00:01:57.241 common/nfp: not in enabled drivers build config 00:01:57.241 common/qat: not in enabled drivers build config 00:01:57.241 common/sfc_efx: not in enabled drivers build config 00:01:57.241 mempool/bucket: not in enabled drivers build config 00:01:57.241 mempool/cnxk: not in enabled drivers build config 00:01:57.241 mempool/dpaa: not in enabled drivers build config 00:01:57.241 mempool/dpaa2: not in enabled drivers build config 00:01:57.241 mempool/octeontx: not in enabled drivers build config 00:01:57.241 mempool/stack: not in enabled drivers build config 00:01:57.241 dma/cnxk: not in enabled drivers build config 00:01:57.241 dma/dpaa: not in enabled drivers build config 00:01:57.241 dma/dpaa2: not in enabled drivers build config 00:01:57.241 dma/hisilicon: not in enabled drivers build config 00:01:57.241 dma/idxd: not in enabled drivers build config 00:01:57.241 dma/ioat: not in enabled drivers build config 00:01:57.241 dma/skeleton: not in enabled drivers build config 00:01:57.241 net/af_packet: not in enabled drivers build config 00:01:57.241 net/af_xdp: not in enabled drivers build config 00:01:57.241 net/ark: not in enabled drivers build config 00:01:57.241 net/atlantic: not in enabled drivers build config 00:01:57.241 net/avp: not in enabled drivers build config 00:01:57.241 net/axgbe: not in enabled drivers build config 00:01:57.241 net/bnx2x: not in enabled drivers build config 00:01:57.241 net/bnxt: not in enabled drivers build config 00:01:57.241 net/bonding: not in enabled drivers build config 00:01:57.241 net/cnxk: not in enabled drivers build config 00:01:57.241 net/cpfl: not in enabled drivers build config 00:01:57.241 net/cxgbe: not in enabled drivers build config 00:01:57.241 net/dpaa: not in enabled drivers build config 00:01:57.241 net/dpaa2: not in enabled 
drivers build config 00:01:57.241 net/e1000: not in enabled drivers build config 00:01:57.241 net/ena: not in enabled drivers build config 00:01:57.241 net/enetc: not in enabled drivers build config 00:01:57.241 net/enetfec: not in enabled drivers build config 00:01:57.241 net/enic: not in enabled drivers build config 00:01:57.241 net/failsafe: not in enabled drivers build config 00:01:57.241 net/fm10k: not in enabled drivers build config 00:01:57.241 net/gve: not in enabled drivers build config 00:01:57.241 net/hinic: not in enabled drivers build config 00:01:57.241 net/hns3: not in enabled drivers build config 00:01:57.241 net/i40e: not in enabled drivers build config 00:01:57.241 net/iavf: not in enabled drivers build config 00:01:57.241 net/ice: not in enabled drivers build config 00:01:57.241 net/idpf: not in enabled drivers build config 00:01:57.241 net/igc: not in enabled drivers build config 00:01:57.241 net/ionic: not in enabled drivers build config 00:01:57.241 net/ipn3ke: not in enabled drivers build config 00:01:57.241 net/ixgbe: not in enabled drivers build config 00:01:57.241 net/mana: not in enabled drivers build config 00:01:57.241 net/memif: not in enabled drivers build config 00:01:57.241 net/mlx4: not in enabled drivers build config 00:01:57.241 net/mlx5: not in enabled drivers build config 00:01:57.241 net/mvneta: not in enabled drivers build config 00:01:57.241 net/mvpp2: not in enabled drivers build config 00:01:57.241 net/netvsc: not in enabled drivers build config 00:01:57.241 net/nfb: not in enabled drivers build config 00:01:57.241 net/nfp: not in enabled drivers build config 00:01:57.241 net/ngbe: not in enabled drivers build config 00:01:57.241 net/null: not in enabled drivers build config 00:01:57.241 net/octeontx: not in enabled drivers build config 00:01:57.241 net/octeon_ep: not in enabled drivers build config 00:01:57.241 net/pcap: not in enabled drivers build config 00:01:57.241 net/pfe: not in enabled drivers build config 00:01:57.241 net/qede: not in enabled drivers build config 00:01:57.241 net/ring: not in enabled drivers build config 00:01:57.241 net/sfc: not in enabled drivers build config 00:01:57.241 net/softnic: not in enabled drivers build config 00:01:57.241 net/tap: not in enabled drivers build config 00:01:57.241 net/thunderx: not in enabled drivers build config 00:01:57.241 net/txgbe: not in enabled drivers build config 00:01:57.241 net/vdev_netvsc: not in enabled drivers build config 00:01:57.241 net/vhost: not in enabled drivers build config 00:01:57.241 net/virtio: not in enabled drivers build config 00:01:57.241 net/vmxnet3: not in enabled drivers build config 00:01:57.241 raw/*: missing internal dependency, "rawdev" 00:01:57.241 crypto/armv8: not in enabled drivers build config 00:01:57.241 crypto/bcmfs: not in enabled drivers build config 00:01:57.241 crypto/caam_jr: not in enabled drivers build config 00:01:57.241 crypto/ccp: not in enabled drivers build config 00:01:57.241 crypto/cnxk: not in enabled drivers build config 00:01:57.241 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.241 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.241 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.241 crypto/mlx5: not in enabled drivers build config 00:01:57.241 crypto/mvsam: not in enabled drivers build config 00:01:57.241 crypto/nitrox: not in enabled drivers build config 00:01:57.241 crypto/null: not in enabled drivers build config 00:01:57.241 crypto/octeontx: not in enabled drivers build config 
00:01:57.241 crypto/openssl: not in enabled drivers build config 00:01:57.241 crypto/scheduler: not in enabled drivers build config 00:01:57.241 crypto/uadk: not in enabled drivers build config 00:01:57.241 crypto/virtio: not in enabled drivers build config 00:01:57.241 compress/isal: not in enabled drivers build config 00:01:57.241 compress/mlx5: not in enabled drivers build config 00:01:57.241 compress/octeontx: not in enabled drivers build config 00:01:57.241 compress/zlib: not in enabled drivers build config 00:01:57.241 regex/*: missing internal dependency, "regexdev" 00:01:57.242 ml/*: missing internal dependency, "mldev" 00:01:57.242 vdpa/ifc: not in enabled drivers build config 00:01:57.242 vdpa/mlx5: not in enabled drivers build config 00:01:57.242 vdpa/nfp: not in enabled drivers build config 00:01:57.242 vdpa/sfc: not in enabled drivers build config 00:01:57.242 event/*: missing internal dependency, "eventdev" 00:01:57.242 baseband/*: missing internal dependency, "bbdev" 00:01:57.242 gpu/*: missing internal dependency, "gpudev" 00:01:57.242 00:01:57.242 00:01:57.242 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.501 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.501 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.501 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.501 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.759 Build targets in project: 85 00:01:57.759 00:01:57.759 DPDK 23.11.0 00:01:57.759 00:01:57.759 User defined options 00:01:57.759 buildtype : debug 00:01:57.759 default_library : static 00:01:57.759 libdir : lib 00:01:57.759 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:57.759 b_sanitize : address 00:01:57.759 c_args : -fPIC -Werror 00:01:57.759 c_link_args : 00:01:57.759 cpu_instruction_set: native 00:01:57.759 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:01:57.759 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:01:57.759 enable_docs : false 00:01:57.759 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:57.759 enable_kmods : false 00:01:57.759 tests : false 00:01:57.759 00:01:57.759 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.759 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.759 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:57.759 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:58.018 ./include//reg_sizes.asm:358: warning: Unknown 
section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:58.018 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:58.276 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:58.276 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:58.276 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:58.276 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:58.276 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:58.276 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:58.276 [4/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.276 [5/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.276 [6/264] Linking static target lib/librte_kvargs.a 00:01:58.276 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.276 [8/264] Linking static target lib/librte_log.a 00:01:58.276 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:58.535 [10/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:58.535 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:58.535 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:58.535 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:58.535 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.535 [15/264] Linking static target lib/librte_telemetry.a 00:01:58.535 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:58.793 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.793 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.793 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.793 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.793 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:58.793 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:59.051 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:59.051 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:59.051 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:59.051 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:59.051 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:59.311 [25/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.311 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:59.311 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:59.311 [28/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:59.311 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:59.311 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:59.311 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:59.311 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:59.311 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:59.569 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:59.569 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:59.569 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:59.569 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:59.569 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:59.569 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:59.569 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:59.569 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:59.569 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:59.569 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:59.569 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:59.569 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:59.827 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:59.827 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:59.827 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:59.827 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:01:59.827 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:59.827 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.086 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:00.086 [46/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:00.086 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:00.086 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:00.086 [48/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.086 [49/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.086 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:00.086 [51/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.086 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.086 [53/264] Linking target lib/librte_log.so.24.0 00:02:00.086 
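For reference, the "User defined options" summary printed above corresponds roughly to a DPDK meson configuration like the sketch below. This is only an illustration, assuming meson is driven directly with these options; the exact invocation comes from SPDK's build scripts and is not captured in this log, and the disable_apps/disable_libs values are the lists shown in the options summary.

    # rough equivalent of the configuration summarized above (sketch, not the actual command from this log)
    meson setup build-tmp \
        --buildtype=debug --default-library=static --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address -Dc_args='-fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps=<list above> -Ddisable_libs=<list above> \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
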
[54/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:00.086 [55/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:00.086 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.345 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:00.345 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:00.345 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:00.345 [58/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.345 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.345 [60/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:00.345 [61/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:00.345 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:00.345 [63/264] Linking target lib/librte_kvargs.so.24.0 00:02:00.345 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.345 [65/264] Linking target lib/librte_telemetry.so.24.0 00:02:00.345 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:00.345 [66/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:00.604 [67/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:00.604 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:00.604 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:00.604 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:00.604 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:00.604 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.604 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:00.604 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:00.604 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:00.604 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:00.604 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:00.863 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.863 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:00.863 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:00.863 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:00.863 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.863 [83/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:00.863 [84/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.863 [85/264] Linking static target lib/librte_ring.a 00:02:00.863 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:01.121 [87/264] Linking static target lib/librte_eal.a 00:02:01.121 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:01.121 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:01.121 [90/264] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:01.121 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:01.121 [92/264] Linking static target lib/librte_mempool.a 00:02:01.121 [93/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:01.121 [94/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:01.379 [95/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:01.379 [96/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.379 [97/264] Linking static target lib/librte_rcu.a 00:02:01.379 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:01.379 [99/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:01.637 [100/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:01.637 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:01.637 [102/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.637 [103/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:01.637 [104/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:01.637 [105/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.637 [106/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:01.896 [107/264] Linking static target lib/librte_net.a 00:02:01.896 [108/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:01.896 [109/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:01.896 [110/264] Linking static target lib/librte_meter.a 00:02:01.896 [111/264] Linking static target lib/librte_mbuf.a 00:02:01.896 [112/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.896 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:01.896 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:01.896 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:02.153 [116/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.153 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:02.153 [118/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.412 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:02.412 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:02.412 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:02.412 [122/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.412 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:02.669 [124/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:02.669 [125/264] Linking static target lib/librte_pci.a 00:02:02.670 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:02.670 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:02.670 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:02.927 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:02.927 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:02.927 [131/264] Generating lib/pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:02.927 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:02.927 [133/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:02.927 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:02.927 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:02.927 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:02.927 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:02.927 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:02.927 [139/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:02.927 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:02.927 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:02.927 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:03.184 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:03.184 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:03.184 [145/264] Linking static target lib/librte_cmdline.a 00:02:03.184 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:03.474 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:03.474 [148/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:03.474 [149/264] Linking static target lib/librte_timer.a 00:02:03.474 [150/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:03.474 [151/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:03.474 [152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:03.732 [153/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:03.732 [154/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:03.732 [155/264] Linking static target lib/librte_compressdev.a 00:02:03.732 [156/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.732 [157/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:03.990 [158/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:03.990 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:03.990 [160/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:03.990 [161/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:03.990 [162/264] Linking static target lib/librte_hash.a 00:02:03.990 [163/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:03.990 [164/264] Linking static target lib/librte_dmadev.a 00:02:04.247 [165/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:04.247 [166/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.247 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:04.247 [168/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:04.247 [169/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:04.247 [170/264] Linking static target lib/librte_ethdev.a 00:02:04.247 [171/264] Generating lib/compressdev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:04.247 [172/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:04.505 [173/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.505 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:04.505 [175/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:04.505 [176/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:04.505 [177/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.505 [178/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:04.763 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:04.763 [180/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:04.763 [181/264] Linking static target lib/librte_power.a 00:02:05.020 [182/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:05.020 [183/264] Linking static target lib/librte_cryptodev.a 00:02:05.020 [184/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:05.020 [185/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:05.020 [186/264] Linking static target lib/librte_reorder.a 00:02:05.020 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:05.020 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:05.278 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:05.278 [190/264] Linking static target lib/librte_security.a 00:02:05.278 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.537 [192/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.537 [193/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.537 [194/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:05.537 [195/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.795 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:05.795 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:05.795 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:05.795 [199/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:06.054 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:06.054 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:06.054 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:06.312 [203/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:06.312 [204/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:06.312 [205/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:06.312 [206/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:06.312 [207/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.312 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:06.570 [209/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.570 [210/264] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.570 [211/264] Linking static target drivers/librte_bus_vdev.a 00:02:06.570 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:06.570 [213/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:06.570 [214/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:06.570 [215/264] Linking static target drivers/librte_bus_pci.a 00:02:06.570 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:06.570 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:06.570 [218/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.827 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:06.827 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:06.827 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:06.827 [222/264] Linking static target drivers/librte_mempool_ring.a 00:02:07.085 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.460 [224/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.460 [225/264] Linking target lib/librte_eal.so.24.0 00:02:08.460 [226/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:08.460 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:08.725 [228/264] Linking target lib/librte_ring.so.24.0 00:02:08.725 [229/264] Linking target lib/librte_meter.so.24.0 00:02:08.725 [230/264] Linking target lib/librte_timer.so.24.0 00:02:08.725 [231/264] Linking target lib/librte_pci.so.24.0 00:02:08.725 [232/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:08.725 [233/264] Linking target lib/librte_dmadev.so.24.0 00:02:08.725 [234/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:08.725 [235/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:08.725 [236/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:08.725 [237/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:08.725 [238/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:08.725 [239/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:08.725 [240/264] Linking target lib/librte_rcu.so.24.0 00:02:08.725 [241/264] Linking target lib/librte_mempool.so.24.0 00:02:08.984 [242/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:08.984 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:08.984 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:08.984 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:09.242 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:09.242 [247/264] Linking target lib/librte_reorder.so.24.0 00:02:09.242 [248/264] Linking target lib/librte_net.so.24.0 00:02:09.242 [249/264] Linking target lib/librte_compressdev.so.24.0 00:02:09.242 [250/264] Linking target lib/librte_cryptodev.so.24.0 00:02:09.242 [251/264] Generating symbol file 
lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:09.242 [252/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:09.242 [253/264] Linking target lib/librte_hash.so.24.0 00:02:09.242 [254/264] Linking target lib/librte_cmdline.so.24.0 00:02:09.242 [255/264] Linking target lib/librte_security.so.24.0 00:02:09.500 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:10.066 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.066 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:10.324 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:10.324 [260/264] Linking target lib/librte_power.so.24.0 00:02:12.226 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:12.485 [262/264] Linking static target lib/librte_vhost.a 00:02:14.387 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.387 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:14.387 INFO: autodetecting backend as ninja 00:02:14.387 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:15.324 CC lib/log/log.o 00:02:15.324 CC lib/log/log_deprecated.o 00:02:15.324 CC lib/ut_mock/mock.o 00:02:15.324 CC lib/log/log_flags.o 00:02:15.324 CC lib/ut/ut.o 00:02:15.324 LIB libspdk_ut_mock.a 00:02:15.324 LIB libspdk_ut.a 00:02:15.324 LIB libspdk_log.a 00:02:15.324 CC lib/util/cpuset.o 00:02:15.324 CC lib/util/bit_array.o 00:02:15.324 CC lib/util/base64.o 00:02:15.324 CC lib/util/crc16.o 00:02:15.324 CC lib/util/crc32.o 00:02:15.324 CC lib/util/crc32c.o 00:02:15.324 CC lib/dma/dma.o 00:02:15.324 CC lib/ioat/ioat.o 00:02:15.324 CXX lib/trace_parser/trace.o 00:02:15.582 CC lib/vfio_user/host/vfio_user_pci.o 00:02:15.582 CC lib/vfio_user/host/vfio_user.o 00:02:15.582 CC lib/util/crc32_ieee.o 00:02:15.582 CC lib/util/crc64.o 00:02:15.582 CC lib/util/dif.o 00:02:15.582 CC lib/util/fd.o 00:02:15.582 LIB libspdk_dma.a 00:02:15.582 CC lib/util/file.o 00:02:15.582 CC lib/util/hexlify.o 00:02:15.839 CC lib/util/iov.o 00:02:15.839 CC lib/util/math.o 00:02:15.839 CC lib/util/pipe.o 00:02:15.839 CC lib/util/strerror_tls.o 00:02:15.839 LIB libspdk_ioat.a 00:02:15.839 CC lib/util/string.o 00:02:15.839 LIB libspdk_vfio_user.a 00:02:15.839 CC lib/util/uuid.o 00:02:15.839 CC lib/util/fd_group.o 00:02:15.839 CC lib/util/xor.o 00:02:15.839 CC lib/util/zipf.o 00:02:16.405 LIB libspdk_util.a 00:02:16.405 CC lib/json/json_parse.o 00:02:16.405 CC lib/json/json_util.o 00:02:16.405 CC lib/json/json_write.o 00:02:16.405 CC lib/vmd/vmd.o 00:02:16.405 CC lib/vmd/led.o 00:02:16.405 CC lib/conf/conf.o 00:02:16.405 CC lib/env_dpdk/env.o 00:02:16.405 CC lib/idxd/idxd.o 00:02:16.663 CC lib/rdma/common.o 00:02:16.663 LIB libspdk_trace_parser.a 00:02:16.663 CC lib/rdma/rdma_verbs.o 00:02:16.663 CC lib/idxd/idxd_user.o 00:02:16.663 LIB libspdk_conf.a 00:02:16.663 CC lib/env_dpdk/memory.o 00:02:16.663 CC lib/env_dpdk/pci.o 00:02:16.663 CC lib/env_dpdk/init.o 00:02:16.921 LIB libspdk_json.a 00:02:16.921 CC lib/env_dpdk/threads.o 00:02:16.921 LIB libspdk_rdma.a 00:02:16.921 CC lib/env_dpdk/pci_ioat.o 00:02:16.921 CC lib/env_dpdk/pci_virtio.o 00:02:16.921 CC lib/env_dpdk/pci_vmd.o 00:02:17.179 CC lib/env_dpdk/pci_idxd.o 00:02:17.179 CC lib/env_dpdk/pci_event.o 00:02:17.179 CC lib/jsonrpc/jsonrpc_server.o 00:02:17.179 CC 
lib/env_dpdk/sigbus_handler.o 00:02:17.179 CC lib/env_dpdk/pci_dpdk.o 00:02:17.179 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:17.179 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:17.179 CC lib/jsonrpc/jsonrpc_client.o 00:02:17.179 LIB libspdk_idxd.a 00:02:17.179 LIB libspdk_vmd.a 00:02:17.179 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:17.438 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:17.697 LIB libspdk_jsonrpc.a 00:02:17.697 CC lib/rpc/rpc.o 00:02:17.956 LIB libspdk_rpc.a 00:02:17.956 CC lib/sock/sock_rpc.o 00:02:17.956 CC lib/sock/sock.o 00:02:17.956 CC lib/notify/notify.o 00:02:17.956 CC lib/trace/trace.o 00:02:17.956 CC lib/notify/notify_rpc.o 00:02:17.956 CC lib/trace/trace_rpc.o 00:02:17.956 CC lib/trace/trace_flags.o 00:02:18.214 LIB libspdk_env_dpdk.a 00:02:18.214 LIB libspdk_notify.a 00:02:18.214 LIB libspdk_trace.a 00:02:18.473 CC lib/thread/thread.o 00:02:18.473 CC lib/thread/iobuf.o 00:02:18.473 LIB libspdk_sock.a 00:02:18.731 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:18.731 CC lib/nvme/nvme_ctrlr.o 00:02:18.731 CC lib/nvme/nvme_fabric.o 00:02:18.731 CC lib/nvme/nvme_pcie.o 00:02:18.731 CC lib/nvme/nvme_ns_cmd.o 00:02:18.731 CC lib/nvme/nvme_ns.o 00:02:18.731 CC lib/nvme/nvme_pcie_common.o 00:02:18.731 CC lib/nvme/nvme_qpair.o 00:02:18.731 CC lib/nvme/nvme.o 00:02:19.298 CC lib/nvme/nvme_quirks.o 00:02:19.298 CC lib/nvme/nvme_transport.o 00:02:19.298 CC lib/nvme/nvme_discovery.o 00:02:19.298 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:19.557 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:19.557 CC lib/nvme/nvme_tcp.o 00:02:19.557 CC lib/nvme/nvme_opal.o 00:02:19.557 CC lib/nvme/nvme_io_msg.o 00:02:19.557 CC lib/nvme/nvme_poll_group.o 00:02:19.815 CC lib/nvme/nvme_zns.o 00:02:19.815 CC lib/nvme/nvme_cuse.o 00:02:20.074 CC lib/nvme/nvme_vfio_user.o 00:02:20.074 CC lib/nvme/nvme_rdma.o 00:02:20.333 LIB libspdk_thread.a 00:02:20.333 CC lib/init/json_config.o 00:02:20.333 CC lib/accel/accel.o 00:02:20.333 CC lib/init/subsystem.o 00:02:20.333 CC lib/blob/blobstore.o 00:02:20.333 CC lib/virtio/virtio.o 00:02:20.333 CC lib/virtio/virtio_vhost_user.o 00:02:20.592 CC lib/virtio/virtio_vfio_user.o 00:02:20.592 CC lib/init/subsystem_rpc.o 00:02:20.592 CC lib/virtio/virtio_pci.o 00:02:20.851 CC lib/init/rpc.o 00:02:20.851 CC lib/blob/request.o 00:02:20.851 CC lib/blob/zeroes.o 00:02:20.851 CC lib/blob/blob_bs_dev.o 00:02:20.851 LIB libspdk_init.a 00:02:20.851 CC lib/accel/accel_rpc.o 00:02:20.851 LIB libspdk_virtio.a 00:02:21.109 CC lib/accel/accel_sw.o 00:02:21.109 CC lib/event/app.o 00:02:21.109 CC lib/event/reactor.o 00:02:21.109 CC lib/event/log_rpc.o 00:02:21.109 CC lib/event/app_rpc.o 00:02:21.109 CC lib/event/scheduler_static.o 00:02:21.367 LIB libspdk_nvme.a 00:02:21.625 LIB libspdk_event.a 00:02:21.625 LIB libspdk_accel.a 00:02:21.625 CC lib/bdev/bdev.o 00:02:21.625 CC lib/bdev/bdev_rpc.o 00:02:21.625 CC lib/bdev/bdev_zone.o 00:02:21.625 CC lib/bdev/part.o 00:02:21.625 CC lib/bdev/scsi_nvme.o 00:02:24.211 LIB libspdk_blob.a 00:02:24.211 CC lib/lvol/lvol.o 00:02:24.211 CC lib/blobfs/blobfs.o 00:02:24.211 CC lib/blobfs/tree.o 00:02:24.470 LIB libspdk_bdev.a 00:02:24.729 CC lib/nvmf/ctrlr.o 00:02:24.729 CC lib/nvmf/ctrlr_bdev.o 00:02:24.729 CC lib/nvmf/ctrlr_discovery.o 00:02:24.729 CC lib/nvmf/subsystem.o 00:02:24.729 CC lib/nvmf/nvmf.o 00:02:24.729 CC lib/scsi/dev.o 00:02:24.729 CC lib/nbd/nbd.o 00:02:24.729 CC lib/ftl/ftl_core.o 00:02:24.729 LIB libspdk_blobfs.a 00:02:24.729 CC lib/ftl/ftl_init.o 00:02:24.988 LIB libspdk_lvol.a 00:02:24.988 CC lib/scsi/lun.o 00:02:24.988 CC lib/ftl/ftl_layout.o 00:02:24.988 CC 
lib/scsi/port.o 00:02:25.247 CC lib/scsi/scsi.o 00:02:25.247 CC lib/scsi/scsi_bdev.o 00:02:25.247 CC lib/nbd/nbd_rpc.o 00:02:25.247 CC lib/ftl/ftl_debug.o 00:02:25.247 CC lib/ftl/ftl_io.o 00:02:25.247 CC lib/ftl/ftl_sb.o 00:02:25.247 CC lib/ftl/ftl_l2p.o 00:02:25.506 LIB libspdk_nbd.a 00:02:25.506 CC lib/ftl/ftl_l2p_flat.o 00:02:25.506 CC lib/ftl/ftl_nv_cache.o 00:02:25.506 CC lib/ftl/ftl_band.o 00:02:25.506 CC lib/ftl/ftl_band_ops.o 00:02:25.506 CC lib/ftl/ftl_writer.o 00:02:25.506 CC lib/nvmf/nvmf_rpc.o 00:02:25.765 CC lib/scsi/scsi_pr.o 00:02:25.765 CC lib/scsi/scsi_rpc.o 00:02:25.765 CC lib/scsi/task.o 00:02:25.765 CC lib/ftl/ftl_rq.o 00:02:26.023 CC lib/ftl/ftl_reloc.o 00:02:26.023 CC lib/ftl/ftl_l2p_cache.o 00:02:26.023 CC lib/ftl/ftl_p2l.o 00:02:26.023 CC lib/ftl/mngt/ftl_mngt.o 00:02:26.023 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:26.023 LIB libspdk_scsi.a 00:02:26.023 CC lib/nvmf/transport.o 00:02:26.023 CC lib/nvmf/tcp.o 00:02:26.282 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:26.282 CC lib/nvmf/rdma.o 00:02:26.542 CC lib/iscsi/conn.o 00:02:26.542 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:26.542 CC lib/vhost/vhost.o 00:02:26.542 CC lib/vhost/vhost_rpc.o 00:02:26.542 CC lib/vhost/vhost_scsi.o 00:02:26.542 CC lib/vhost/vhost_blk.o 00:02:26.542 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:26.801 CC lib/vhost/rte_vhost_user.o 00:02:26.801 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:27.060 CC lib/iscsi/init_grp.o 00:02:27.060 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:27.060 CC lib/iscsi/iscsi.o 00:02:27.060 CC lib/iscsi/md5.o 00:02:27.060 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:27.319 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:27.319 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:27.319 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:27.319 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:27.319 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:27.319 CC lib/ftl/utils/ftl_conf.o 00:02:27.579 CC lib/iscsi/param.o 00:02:27.579 CC lib/iscsi/portal_grp.o 00:02:27.579 CC lib/iscsi/tgt_node.o 00:02:27.579 CC lib/iscsi/iscsi_subsystem.o 00:02:27.579 CC lib/ftl/utils/ftl_md.o 00:02:27.838 LIB libspdk_vhost.a 00:02:27.838 CC lib/ftl/utils/ftl_mempool.o 00:02:27.838 CC lib/ftl/utils/ftl_bitmap.o 00:02:27.838 CC lib/ftl/utils/ftl_property.o 00:02:27.838 CC lib/iscsi/iscsi_rpc.o 00:02:28.097 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:28.097 CC lib/iscsi/task.o 00:02:28.097 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:28.097 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:28.097 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:28.097 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:28.097 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:28.097 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:28.357 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:28.357 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:28.357 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:28.357 CC lib/ftl/base/ftl_base_dev.o 00:02:28.357 CC lib/ftl/base/ftl_base_bdev.o 00:02:28.357 CC lib/ftl/ftl_trace.o 00:02:28.616 LIB libspdk_ftl.a 00:02:28.616 LIB libspdk_iscsi.a 00:02:28.876 LIB libspdk_nvmf.a 00:02:29.134 CC module/env_dpdk/env_dpdk_rpc.o 00:02:29.134 CC module/blob/bdev/blob_bdev.o 00:02:29.134 CC module/accel/iaa/accel_iaa.o 00:02:29.134 CC module/accel/error/accel_error.o 00:02:29.134 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:29.134 CC module/sock/posix/posix.o 00:02:29.135 CC module/accel/ioat/accel_ioat.o 00:02:29.135 CC module/scheduler/gscheduler/gscheduler.o 00:02:29.135 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:29.135 CC module/accel/dsa/accel_dsa.o 00:02:29.393 LIB libspdk_env_dpdk_rpc.a 00:02:29.393 LIB 
libspdk_scheduler_gscheduler.a 00:02:29.393 CC module/accel/dsa/accel_dsa_rpc.o 00:02:29.393 LIB libspdk_scheduler_dpdk_governor.a 00:02:29.393 CC module/accel/ioat/accel_ioat_rpc.o 00:02:29.393 CC module/accel/error/accel_error_rpc.o 00:02:29.393 CC module/accel/iaa/accel_iaa_rpc.o 00:02:29.393 LIB libspdk_scheduler_dynamic.a 00:02:29.393 LIB libspdk_accel_dsa.a 00:02:29.393 LIB libspdk_accel_error.a 00:02:29.393 LIB libspdk_blob_bdev.a 00:02:29.393 LIB libspdk_accel_iaa.a 00:02:29.393 LIB libspdk_accel_ioat.a 00:02:29.653 CC module/bdev/malloc/bdev_malloc.o 00:02:29.653 CC module/bdev/lvol/vbdev_lvol.o 00:02:29.653 CC module/bdev/nvme/bdev_nvme.o 00:02:29.653 CC module/bdev/gpt/gpt.o 00:02:29.653 CC module/bdev/delay/vbdev_delay.o 00:02:29.653 CC module/bdev/passthru/vbdev_passthru.o 00:02:29.653 CC module/bdev/error/vbdev_error.o 00:02:29.653 CC module/bdev/null/bdev_null.o 00:02:29.653 CC module/blobfs/bdev/blobfs_bdev.o 00:02:29.912 CC module/bdev/gpt/vbdev_gpt.o 00:02:29.912 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:29.912 CC module/bdev/null/bdev_null_rpc.o 00:02:29.912 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:29.912 LIB libspdk_sock_posix.a 00:02:29.912 CC module/bdev/error/vbdev_error_rpc.o 00:02:29.912 LIB libspdk_blobfs_bdev.a 00:02:29.912 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:30.171 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:30.171 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:30.171 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:30.171 LIB libspdk_bdev_null.a 00:02:30.171 LIB libspdk_bdev_passthru.a 00:02:30.171 LIB libspdk_bdev_error.a 00:02:30.171 LIB libspdk_bdev_gpt.a 00:02:30.171 CC module/bdev/nvme/nvme_rpc.o 00:02:30.171 CC module/bdev/nvme/bdev_mdns_client.o 00:02:30.171 CC module/bdev/nvme/vbdev_opal.o 00:02:30.171 LIB libspdk_bdev_delay.a 00:02:30.171 CC module/bdev/raid/bdev_raid.o 00:02:30.171 LIB libspdk_bdev_malloc.a 00:02:30.171 CC module/bdev/raid/bdev_raid_rpc.o 00:02:30.171 CC module/bdev/split/vbdev_split.o 00:02:30.429 CC module/bdev/raid/bdev_raid_sb.o 00:02:30.429 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:30.429 LIB libspdk_bdev_lvol.a 00:02:30.429 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:30.429 CC module/bdev/raid/raid0.o 00:02:30.429 CC module/bdev/raid/raid1.o 00:02:30.429 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:30.429 CC module/bdev/split/vbdev_split_rpc.o 00:02:30.687 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:30.687 CC module/bdev/raid/concat.o 00:02:30.687 CC module/bdev/raid/raid5f.o 00:02:30.687 LIB libspdk_bdev_split.a 00:02:30.687 LIB libspdk_bdev_zone_block.a 00:02:30.945 CC module/bdev/aio/bdev_aio.o 00:02:30.945 CC module/bdev/aio/bdev_aio_rpc.o 00:02:30.945 CC module/bdev/ftl/bdev_ftl.o 00:02:30.945 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:30.945 CC module/bdev/iscsi/bdev_iscsi.o 00:02:30.945 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:30.945 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:30.945 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:30.945 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:31.205 LIB libspdk_bdev_ftl.a 00:02:31.205 LIB libspdk_bdev_aio.a 00:02:31.205 LIB libspdk_bdev_iscsi.a 00:02:31.463 LIB libspdk_bdev_raid.a 00:02:31.463 LIB libspdk_bdev_virtio.a 00:02:32.029 LIB libspdk_bdev_nvme.a 00:02:32.287 CC module/event/subsystems/vmd/vmd.o 00:02:32.287 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:32.287 CC module/event/subsystems/iobuf/iobuf.o 00:02:32.287 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:32.287 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:32.287 CC 
module/event/subsystems/scheduler/scheduler.o 00:02:32.287 CC module/event/subsystems/sock/sock.o 00:02:32.546 LIB libspdk_event_scheduler.a 00:02:32.546 LIB libspdk_event_vhost_blk.a 00:02:32.546 LIB libspdk_event_vmd.a 00:02:32.546 LIB libspdk_event_sock.a 00:02:32.546 LIB libspdk_event_iobuf.a 00:02:32.804 CC module/event/subsystems/accel/accel.o 00:02:32.804 LIB libspdk_event_accel.a 00:02:33.062 CC module/event/subsystems/bdev/bdev.o 00:02:33.320 LIB libspdk_event_bdev.a 00:02:33.320 CC module/event/subsystems/scsi/scsi.o 00:02:33.320 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:33.320 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:33.320 CC module/event/subsystems/nbd/nbd.o 00:02:33.579 LIB libspdk_event_nbd.a 00:02:33.579 LIB libspdk_event_scsi.a 00:02:33.579 LIB libspdk_event_nvmf.a 00:02:33.579 CC module/event/subsystems/iscsi/iscsi.o 00:02:33.579 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:33.836 LIB libspdk_event_vhost_scsi.a 00:02:33.836 LIB libspdk_event_iscsi.a 00:02:33.837 CC app/trace_record/trace_record.o 00:02:34.094 CXX app/trace/trace.o 00:02:34.094 CC examples/accel/perf/accel_perf.o 00:02:34.094 CC examples/ioat/perf/perf.o 00:02:34.094 CC examples/blob/hello_world/hello_blob.o 00:02:34.094 CC examples/bdev/hello_world/hello_bdev.o 00:02:34.094 CC test/accel/dif/dif.o 00:02:34.094 CC test/app/bdev_svc/bdev_svc.o 00:02:34.094 CC test/bdev/bdevio/bdevio.o 00:02:34.094 CC test/blobfs/mkfs/mkfs.o 00:02:34.353 LINK bdev_svc 00:02:34.353 LINK spdk_trace_record 00:02:34.353 LINK hello_blob 00:02:34.353 LINK mkfs 00:02:34.353 LINK hello_bdev 00:02:34.353 LINK ioat_perf 00:02:34.612 LINK spdk_trace 00:02:34.612 LINK bdevio 00:02:34.612 LINK accel_perf 00:02:34.612 LINK dif 00:02:34.870 CC examples/blob/cli/blobcli.o 00:02:34.870 CC examples/ioat/verify/verify.o 00:02:34.870 CC app/nvmf_tgt/nvmf_main.o 00:02:35.128 LINK nvmf_tgt 00:02:35.128 LINK verify 00:02:35.386 LINK blobcli 00:02:35.643 CC app/iscsi_tgt/iscsi_tgt.o 00:02:35.901 LINK iscsi_tgt 00:02:35.901 CC app/spdk_tgt/spdk_tgt.o 00:02:36.160 LINK spdk_tgt 00:02:37.096 CC app/spdk_lspci/spdk_lspci.o 00:02:37.096 CC app/spdk_nvme_perf/perf.o 00:02:37.096 LINK spdk_lspci 00:02:37.096 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:37.355 CC examples/bdev/bdevperf/bdevperf.o 00:02:37.355 CC app/spdk_nvme_identify/identify.o 00:02:37.355 CC app/spdk_nvme_discover/discovery_aer.o 00:02:37.614 LINK spdk_nvme_discover 00:02:37.614 LINK nvme_fuzz 00:02:38.181 LINK spdk_nvme_perf 00:02:38.181 CC app/spdk_top/spdk_top.o 00:02:38.181 LINK bdevperf 00:02:38.181 LINK spdk_nvme_identify 00:02:38.181 CC app/spdk_dd/spdk_dd.o 00:02:38.181 CC app/vhost/vhost.o 00:02:38.440 LINK vhost 00:02:38.714 CC app/fio/nvme/fio_plugin.o 00:02:38.714 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:38.714 LINK spdk_dd 00:02:38.977 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:38.977 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:38.977 TEST_HEADER include/spdk/accel_module.h 00:02:38.977 TEST_HEADER include/spdk/bit_pool.h 00:02:38.977 TEST_HEADER include/spdk/ioat.h 00:02:38.977 TEST_HEADER include/spdk/blobfs.h 00:02:38.977 TEST_HEADER include/spdk/notify.h 00:02:38.977 TEST_HEADER include/spdk/pipe.h 00:02:38.977 TEST_HEADER include/spdk/accel.h 00:02:38.977 TEST_HEADER include/spdk/file.h 00:02:38.977 TEST_HEADER include/spdk/version.h 00:02:38.977 TEST_HEADER include/spdk/trace_parser.h 00:02:38.977 TEST_HEADER include/spdk/opal_spec.h 00:02:38.977 TEST_HEADER include/spdk/uuid.h 00:02:38.977 TEST_HEADER include/spdk/likely.h 
00:02:38.977 TEST_HEADER include/spdk/dif.h 00:02:38.977 TEST_HEADER include/spdk/memory.h 00:02:38.977 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:38.977 TEST_HEADER include/spdk/dma.h 00:02:38.977 TEST_HEADER include/spdk/nbd.h 00:02:38.977 TEST_HEADER include/spdk/conf.h 00:02:38.977 TEST_HEADER include/spdk/env_dpdk.h 00:02:38.977 TEST_HEADER include/spdk/nvmf_spec.h 00:02:38.977 TEST_HEADER include/spdk/iscsi_spec.h 00:02:38.977 TEST_HEADER include/spdk/mmio.h 00:02:38.977 TEST_HEADER include/spdk/json.h 00:02:38.977 TEST_HEADER include/spdk/opal.h 00:02:38.977 TEST_HEADER include/spdk/bdev.h 00:02:38.977 TEST_HEADER include/spdk/base64.h 00:02:38.977 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:38.977 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:38.977 LINK spdk_top 00:02:38.977 TEST_HEADER include/spdk/fd.h 00:02:38.977 TEST_HEADER include/spdk/barrier.h 00:02:38.977 TEST_HEADER include/spdk/scsi_spec.h 00:02:38.977 TEST_HEADER include/spdk/zipf.h 00:02:38.977 TEST_HEADER include/spdk/nvmf.h 00:02:39.235 TEST_HEADER include/spdk/queue.h 00:02:39.235 TEST_HEADER include/spdk/xor.h 00:02:39.235 TEST_HEADER include/spdk/cpuset.h 00:02:39.235 TEST_HEADER include/spdk/thread.h 00:02:39.235 TEST_HEADER include/spdk/bdev_zone.h 00:02:39.235 TEST_HEADER include/spdk/fd_group.h 00:02:39.235 TEST_HEADER include/spdk/tree.h 00:02:39.235 TEST_HEADER include/spdk/blob_bdev.h 00:02:39.235 TEST_HEADER include/spdk/crc64.h 00:02:39.235 TEST_HEADER include/spdk/assert.h 00:02:39.235 TEST_HEADER include/spdk/nvme_spec.h 00:02:39.235 TEST_HEADER include/spdk/endian.h 00:02:39.235 TEST_HEADER include/spdk/pci_ids.h 00:02:39.235 TEST_HEADER include/spdk/log.h 00:02:39.235 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:39.235 TEST_HEADER include/spdk/ftl.h 00:02:39.235 TEST_HEADER include/spdk/config.h 00:02:39.235 TEST_HEADER include/spdk/vhost.h 00:02:39.235 TEST_HEADER include/spdk/bdev_module.h 00:02:39.235 TEST_HEADER include/spdk/nvme_intel.h 00:02:39.235 TEST_HEADER include/spdk/idxd_spec.h 00:02:39.235 TEST_HEADER include/spdk/crc16.h 00:02:39.235 TEST_HEADER include/spdk/nvme.h 00:02:39.235 TEST_HEADER include/spdk/stdinc.h 00:02:39.235 TEST_HEADER include/spdk/scsi.h 00:02:39.235 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:39.235 TEST_HEADER include/spdk/idxd.h 00:02:39.235 TEST_HEADER include/spdk/hexlify.h 00:02:39.235 TEST_HEADER include/spdk/reduce.h 00:02:39.235 TEST_HEADER include/spdk/crc32.h 00:02:39.235 TEST_HEADER include/spdk/init.h 00:02:39.235 TEST_HEADER include/spdk/nvmf_transport.h 00:02:39.235 TEST_HEADER include/spdk/nvme_zns.h 00:02:39.235 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:39.235 TEST_HEADER include/spdk/util.h 00:02:39.235 TEST_HEADER include/spdk/jsonrpc.h 00:02:39.235 TEST_HEADER include/spdk/env.h 00:02:39.235 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:39.235 TEST_HEADER include/spdk/lvol.h 00:02:39.235 TEST_HEADER include/spdk/histogram_data.h 00:02:39.235 TEST_HEADER include/spdk/event.h 00:02:39.235 TEST_HEADER include/spdk/trace.h 00:02:39.235 TEST_HEADER include/spdk/ioat_spec.h 00:02:39.235 TEST_HEADER include/spdk/string.h 00:02:39.235 TEST_HEADER include/spdk/ublk.h 00:02:39.235 TEST_HEADER include/spdk/bit_array.h 00:02:39.235 TEST_HEADER include/spdk/scheduler.h 00:02:39.235 TEST_HEADER include/spdk/blob.h 00:02:39.235 TEST_HEADER include/spdk/gpt_spec.h 00:02:39.235 TEST_HEADER include/spdk/sock.h 00:02:39.235 TEST_HEADER include/spdk/vmd.h 00:02:39.235 TEST_HEADER include/spdk/rpc.h 00:02:39.235 CXX test/cpp_headers/accel_module.o 
00:02:39.494 CXX test/cpp_headers/bit_pool.o 00:02:39.494 LINK spdk_nvme 00:02:39.494 LINK vhost_fuzz 00:02:39.494 CC test/dma/test_dma/test_dma.o 00:02:39.752 CC test/env/vtophys/vtophys.o 00:02:39.752 CC test/env/mem_callbacks/mem_callbacks.o 00:02:39.752 CXX test/cpp_headers/ioat.o 00:02:39.752 LINK vtophys 00:02:40.011 CXX test/cpp_headers/blobfs.o 00:02:40.011 LINK test_dma 00:02:40.011 CXX test/cpp_headers/notify.o 00:02:40.011 LINK mem_callbacks 00:02:40.270 CXX test/cpp_headers/pipe.o 00:02:40.270 CXX test/cpp_headers/accel.o 00:02:40.529 CC test/app/histogram_perf/histogram_perf.o 00:02:40.529 CXX test/cpp_headers/file.o 00:02:40.529 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:40.529 CC test/event/event_perf/event_perf.o 00:02:40.529 LINK histogram_perf 00:02:40.529 CC app/fio/bdev/fio_plugin.o 00:02:40.787 CXX test/cpp_headers/version.o 00:02:40.787 LINK iscsi_fuzz 00:02:40.787 LINK event_perf 00:02:40.787 CXX test/cpp_headers/trace_parser.o 00:02:40.787 LINK env_dpdk_post_init 00:02:41.046 CC examples/nvme/hello_world/hello_world.o 00:02:41.046 CXX test/cpp_headers/opal_spec.o 00:02:41.046 LINK hello_world 00:02:41.303 CXX test/cpp_headers/uuid.o 00:02:41.303 LINK spdk_bdev 00:02:41.303 CXX test/cpp_headers/likely.o 00:02:41.303 CC examples/nvme/reconnect/reconnect.o 00:02:41.303 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:41.303 CC examples/nvme/arbitration/arbitration.o 00:02:41.303 CXX test/cpp_headers/dif.o 00:02:41.562 CC test/event/reactor/reactor.o 00:02:41.820 LINK reactor 00:02:41.820 CXX test/cpp_headers/memory.o 00:02:41.820 LINK reconnect 00:02:41.820 LINK arbitration 00:02:41.820 CC test/env/memory/memory_ut.o 00:02:41.820 CC test/app/jsoncat/jsoncat.o 00:02:42.079 CXX test/cpp_headers/vfio_user_pci.o 00:02:42.079 LINK nvme_manage 00:02:42.079 LINK jsoncat 00:02:42.079 CXX test/cpp_headers/dma.o 00:02:42.079 CXX test/cpp_headers/nbd.o 00:02:42.079 CC test/env/pci/pci_ut.o 00:02:42.338 CXX test/cpp_headers/conf.o 00:02:42.597 CXX test/cpp_headers/env_dpdk.o 00:02:42.597 CC test/event/reactor_perf/reactor_perf.o 00:02:42.597 CXX test/cpp_headers/nvmf_spec.o 00:02:42.597 LINK pci_ut 00:02:42.597 LINK memory_ut 00:02:42.597 LINK reactor_perf 00:02:42.856 CC test/app/stub/stub.o 00:02:42.856 CXX test/cpp_headers/iscsi_spec.o 00:02:42.856 CXX test/cpp_headers/mmio.o 00:02:42.856 CC test/lvol/esnap/esnap.o 00:02:42.856 LINK stub 00:02:43.113 CXX test/cpp_headers/json.o 00:02:43.113 CC test/rpc_client/rpc_client_test.o 00:02:43.113 CC examples/nvme/hotplug/hotplug.o 00:02:43.113 CC test/nvme/aer/aer.o 00:02:43.113 CC test/thread/poller_perf/poller_perf.o 00:02:43.370 CXX test/cpp_headers/opal.o 00:02:43.370 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:02:43.370 LINK rpc_client_test 00:02:43.370 LINK poller_perf 00:02:43.370 LINK hotplug 00:02:43.628 CC test/event/app_repeat/app_repeat.o 00:02:43.628 LINK aer 00:02:43.628 CXX test/cpp_headers/bdev.o 00:02:43.628 LINK histogram_ut 00:02:43.628 LINK app_repeat 00:02:43.886 CXX test/cpp_headers/base64.o 00:02:43.886 CXX test/cpp_headers/blobfs_bdev.o 00:02:44.144 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:02:44.144 CC examples/sock/hello_world/hello_sock.o 00:02:44.144 CXX test/cpp_headers/nvme_ocssd.o 00:02:44.144 CC test/thread/lock/spdk_lock.o 00:02:44.144 CC test/unit/lib/accel/accel.c/accel_ut.o 00:02:44.402 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:02:44.402 CXX test/cpp_headers/fd.o 00:02:44.402 LINK hello_sock 00:02:44.402 CXX test/cpp_headers/barrier.o 00:02:44.660 CC 
test/nvme/reset/reset.o 00:02:44.660 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:44.660 CXX test/cpp_headers/scsi_spec.o 00:02:44.660 CC test/event/scheduler/scheduler.o 00:02:44.918 LINK cmb_copy 00:02:44.918 CXX test/cpp_headers/zipf.o 00:02:44.918 LINK blob_bdev_ut 00:02:44.918 LINK reset 00:02:44.918 CXX test/cpp_headers/nvmf.o 00:02:44.918 LINK scheduler 00:02:45.177 CC test/unit/lib/blob/blob.c/blob_ut.o 00:02:45.177 CXX test/cpp_headers/queue.o 00:02:45.177 CC examples/vmd/lsvmd/lsvmd.o 00:02:45.177 CXX test/cpp_headers/xor.o 00:02:45.435 LINK lsvmd 00:02:45.435 CXX test/cpp_headers/cpuset.o 00:02:45.693 CXX test/cpp_headers/thread.o 00:02:45.953 CXX test/cpp_headers/bdev_zone.o 00:02:45.953 CC examples/nvme/abort/abort.o 00:02:45.953 CC test/nvme/sgl/sgl.o 00:02:45.953 CXX test/cpp_headers/fd_group.o 00:02:45.953 LINK spdk_lock 00:02:46.211 CXX test/cpp_headers/tree.o 00:02:46.211 LINK sgl 00:02:46.211 LINK abort 00:02:46.211 CXX test/cpp_headers/blob_bdev.o 00:02:46.469 CC examples/vmd/led/led.o 00:02:46.469 CXX test/cpp_headers/crc64.o 00:02:46.469 LINK led 00:02:46.728 CXX test/cpp_headers/assert.o 00:02:46.728 LINK accel_ut 00:02:46.728 CXX test/cpp_headers/nvme_spec.o 00:02:46.986 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:46.986 CXX test/cpp_headers/endian.o 00:02:46.986 LINK pmr_persistence 00:02:46.986 CXX test/cpp_headers/pci_ids.o 00:02:46.986 CXX test/cpp_headers/log.o 00:02:47.245 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:47.245 CC test/nvme/e2edp/nvme_dp.o 00:02:47.504 CC examples/util/zipf/zipf.o 00:02:47.504 CC examples/nvmf/nvmf/nvmf.o 00:02:47.504 CXX test/cpp_headers/ftl.o 00:02:47.504 CXX test/cpp_headers/config.o 00:02:47.504 LINK zipf 00:02:47.504 LINK nvme_dp 00:02:47.773 CXX test/cpp_headers/vhost.o 00:02:47.773 CC examples/thread/thread/thread_ex.o 00:02:47.773 LINK nvmf 00:02:47.773 CC examples/idxd/perf/perf.o 00:02:47.773 CXX test/cpp_headers/bdev_module.o 00:02:48.036 CXX test/cpp_headers/nvme_intel.o 00:02:48.036 LINK thread 00:02:48.036 CXX test/cpp_headers/idxd_spec.o 00:02:48.036 CXX test/cpp_headers/crc16.o 00:02:48.296 LINK idxd_perf 00:02:48.296 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:48.296 CXX test/cpp_headers/nvme.o 00:02:48.296 CXX test/cpp_headers/stdinc.o 00:02:48.296 LINK interrupt_tgt 00:02:48.554 CXX test/cpp_headers/scsi.o 00:02:48.554 LINK esnap 00:02:48.554 CC test/unit/lib/bdev/part.c/part_ut.o 00:02:48.554 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:48.554 CC test/nvme/overhead/overhead.o 00:02:48.812 CXX test/cpp_headers/idxd.o 00:02:48.812 CXX test/cpp_headers/hexlify.o 00:02:49.071 LINK overhead 00:02:49.071 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:02:49.071 CXX test/cpp_headers/reduce.o 00:02:49.071 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:02:49.330 LINK tree_ut 00:02:49.330 CXX test/cpp_headers/crc32.o 00:02:49.330 CXX test/cpp_headers/init.o 00:02:49.588 CXX test/cpp_headers/nvmf_transport.o 00:02:49.588 CC test/unit/lib/dma/dma.c/dma_ut.o 00:02:49.588 CXX test/cpp_headers/nvme_zns.o 00:02:49.846 CC test/nvme/err_injection/err_injection.o 00:02:49.846 LINK bdev_ut 00:02:49.846 CXX test/cpp_headers/vfio_user_spec.o 00:02:49.846 LINK dma_ut 00:02:50.105 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:02:50.363 LINK err_injection 00:02:50.363 CXX test/cpp_headers/util.o 00:02:50.363 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:02:50.363 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:02:50.363 CXX test/cpp_headers/jsonrpc.o 00:02:50.622 CXX 
test/cpp_headers/env.o 00:02:50.622 LINK blobfs_bdev_ut 00:02:50.622 LINK scsi_nvme_ut 00:02:50.622 LINK blobfs_async_ut 00:02:50.938 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:02:50.938 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:02:50.938 CXX test/cpp_headers/nvmf_cmd.o 00:02:50.938 CC test/unit/lib/event/app.c/app_ut.o 00:02:50.938 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:02:51.196 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:02:51.196 CXX test/cpp_headers/lvol.o 00:02:51.520 CC test/nvme/startup/startup.o 00:02:51.520 LINK gpt_ut 00:02:51.520 CXX test/cpp_headers/histogram_data.o 00:02:51.520 LINK blobfs_sync_ut 00:02:51.520 LINK ioat_ut 00:02:51.520 LINK startup 00:02:51.520 CXX test/cpp_headers/event.o 00:02:51.520 CXX test/cpp_headers/trace.o 00:02:51.778 CXX test/cpp_headers/ioat_spec.o 00:02:51.778 LINK app_ut 00:02:51.778 CXX test/cpp_headers/string.o 00:02:51.778 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:02:51.778 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:02:52.036 CC test/unit/lib/iscsi/param.c/param_ut.o 00:02:52.036 CXX test/cpp_headers/ublk.o 00:02:52.036 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:02:52.036 LINK vbdev_lvol_ut 00:02:52.294 CXX test/cpp_headers/bit_array.o 00:02:52.294 LINK init_grp_ut 00:02:52.294 LINK conn_ut 00:02:52.294 CXX test/cpp_headers/scheduler.o 00:02:52.294 LINK part_ut 00:02:52.551 CXX test/cpp_headers/blob.o 00:02:52.551 LINK param_ut 00:02:52.551 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:02:52.551 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:02:52.551 CXX test/cpp_headers/gpt_spec.o 00:02:52.551 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:02:52.551 CC test/nvme/reserve/reserve.o 00:02:52.551 CXX test/cpp_headers/sock.o 00:02:52.809 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:02:52.809 CXX test/cpp_headers/vmd.o 00:02:52.809 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:02:52.809 LINK reserve 00:02:53.068 LINK reactor_ut 00:02:53.068 LINK blob_ut 00:02:53.068 CXX test/cpp_headers/rpc.o 00:02:53.068 LINK json_util_ut 00:02:53.326 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:02:53.326 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:02:53.326 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:02:53.326 LINK json_write_ut 00:02:53.326 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:02:53.584 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:02:53.841 LINK bdev_raid_sb_ut 00:02:53.841 LINK bdev_zone_ut 00:02:53.841 LINK concat_ut 00:02:53.841 LINK raid1_ut 00:02:54.099 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:02:54.099 CC test/nvme/simple_copy/simple_copy.o 00:02:54.099 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:02:54.357 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:02:54.357 CC test/nvme/connect_stress/connect_stress.o 00:02:54.357 LINK simple_copy 00:02:54.357 LINK connect_stress 00:02:54.357 LINK vbdev_zone_block_ut 00:02:54.357 LINK iscsi_ut 00:02:54.615 LINK portal_grp_ut 00:02:54.873 CC test/nvme/boot_partition/boot_partition.o 00:02:54.873 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:02:54.873 CC test/nvme/compliance/nvme_compliance.o 00:02:54.873 LINK boot_partition 00:02:55.131 LINK bdev_raid_ut 00:02:55.131 LINK json_parse_ut 00:02:55.390 LINK raid5f_ut 00:02:55.390 LINK nvme_compliance 00:02:55.390 CC test/nvme/fused_ordering/fused_ordering.o 00:02:55.390 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:55.648 CC test/nvme/fdp/fdp.o 00:02:55.648 CC 
test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:02:55.648 LINK fused_ordering 00:02:55.648 LINK tgt_node_ut 00:02:55.648 LINK doorbell_aers 00:02:55.648 CC test/unit/lib/log/log.c/log_ut.o 00:02:55.905 LINK fdp 00:02:55.905 LINK jsonrpc_server_ut 00:02:55.905 LINK log_ut 00:02:55.905 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:02:56.163 CC test/nvme/cuse/cuse.o 00:02:56.163 CC test/unit/lib/notify/notify.c/notify_ut.o 00:02:56.163 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:02:56.421 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:02:56.678 LINK notify_ut 00:02:56.678 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:02:56.678 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:02:56.678 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:02:56.936 LINK bdev_ut 00:02:56.936 LINK cuse 00:02:56.936 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:02:57.194 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:02:57.194 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:02:57.452 LINK nvme_ut 00:02:57.452 LINK nvme_ns_ut 00:02:57.709 LINK nvme_ctrlr_ocssd_cmd_ut 00:02:57.709 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:02:57.967 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:02:57.967 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:02:57.967 LINK lvol_ut 00:02:57.967 LINK nvme_ctrlr_cmd_ut 00:02:58.240 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:02:58.240 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:02:58.816 LINK nvme_quirks_ut 00:02:58.816 LINK nvme_poll_group_ut 00:02:58.816 LINK nvme_ns_cmd_ut 00:02:58.816 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:02:59.073 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:02:59.073 LINK bdev_nvme_ut 00:02:59.073 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:02:59.331 LINK nvme_qpair_ut 00:02:59.331 LINK nvme_ns_ocssd_cmd_ut 00:02:59.331 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:02:59.590 LINK nvme_pcie_ut 00:02:59.590 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:02:59.590 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:02:59.848 LINK nvme_ctrlr_ut 00:02:59.848 LINK nvme_io_msg_ut 00:02:59.848 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:02:59.848 LINK nvme_transport_ut 00:02:59.848 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:00.106 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:00.106 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:00.106 LINK scsi_ut 00:03:00.106 LINK dev_ut 00:03:00.363 LINK nvme_fabric_ut 00:03:00.363 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:00.363 LINK ctrlr_ut 00:03:00.363 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:00.621 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:00.621 LINK lun_ut 00:03:00.878 LINK base64_ut 00:03:00.878 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:00.878 LINK nvme_pcie_common_ut 00:03:00.878 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:00.878 LINK pci_event_ut 00:03:00.878 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:01.136 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:01.136 LINK iobuf_ut 00:03:01.136 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:01.394 LINK subsystem_ut 00:03:01.394 LINK bit_array_ut 00:03:01.394 LINK cpuset_ut 00:03:01.394 LINK nvme_tcp_ut 00:03:01.394 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:01.651 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:01.651 LINK sock_ut 00:03:01.651 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:01.651 CC 
test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:01.651 LINK tcp_ut 00:03:01.909 LINK crc16_ut 00:03:01.909 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:01.909 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:01.909 LINK scsi_bdev_ut 00:03:01.909 LINK nvme_opal_ut 00:03:01.909 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:02.167 LINK rpc_ut 00:03:02.167 LINK crc32_ieee_ut 00:03:02.167 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:02.167 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:02.167 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:02.426 LINK idxd_user_ut 00:03:02.426 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:02.426 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:02.426 LINK crc32c_ut 00:03:02.684 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:02.684 LINK thread_ut 00:03:02.684 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:02.684 LINK scsi_pr_ut 00:03:02.942 LINK posix_ut 00:03:02.942 LINK crc64_ut 00:03:02.942 LINK nvme_cuse_ut 00:03:03.200 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:03.200 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:03.200 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:03.200 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:03.200 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:03.456 LINK idxd_ut 00:03:03.456 LINK nvme_rdma_ut 00:03:03.456 LINK ftl_l2p_ut 00:03:03.714 LINK common_ut 00:03:03.714 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:03.714 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:03.714 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:03.714 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:03.972 LINK ftl_bitmap_ut 00:03:04.230 LINK ctrlr_discovery_ut 00:03:04.230 LINK ftl_mempool_ut 00:03:04.230 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:04.230 LINK ftl_io_ut 00:03:04.230 LINK ctrlr_bdev_ut 00:03:04.230 LINK vhost_ut 00:03:04.488 LINK subsystem_ut 00:03:04.488 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:04.488 LINK dif_ut 00:03:04.488 LINK ftl_mngt_ut 00:03:04.488 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:04.488 LINK ftl_band_ut 00:03:04.488 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:04.488 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:04.747 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:04.747 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:04.747 CC test/unit/lib/util/math.c/math_ut.o 00:03:04.747 CC test/unit/lib/util/string.c/string_ut.o 00:03:04.747 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:05.005 LINK math_ut 00:03:05.005 LINK iov_ut 00:03:05.005 LINK string_ut 00:03:05.005 LINK xor_ut 00:03:05.263 LINK pipe_ut 00:03:05.520 LINK ftl_sb_ut 00:03:05.778 LINK ftl_layout_upgrade_ut 00:03:05.778 LINK nvmf_ut 00:03:08.307 LINK transport_ut 00:03:08.307 LINK rdma_ut 00:03:08.566 00:03:08.566 real 1m53.241s 00:03:08.566 user 9m37.559s 00:03:08.566 sys 1m42.243s 00:03:08.566 12:48:27 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:08.566 12:48:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:08.566 ************************************ 00:03:08.566 END TEST unittest_build 00:03:08.566 ************************************ 00:03:08.566 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:08.825 12:48:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:08.825 12:48:27 -- nvmf/common.sh@7 -- # uname -s 00:03:08.825 12:48:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:08.825 12:48:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:08.825 
12:48:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:08.825 12:48:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:08.825 12:48:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:08.825 12:48:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:08.825 12:48:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:08.825 12:48:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:08.825 12:48:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:08.825 12:48:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:08.825 12:48:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dfa24b05-b4b7-469e-93f4-496b726a39b7 00:03:08.825 12:48:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=dfa24b05-b4b7-469e-93f4-496b726a39b7 00:03:08.825 12:48:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:08.825 12:48:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:08.825 12:48:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:08.825 12:48:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:08.825 12:48:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:08.825 12:48:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:08.825 12:48:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:08.825 12:48:27 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:08.825 12:48:27 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:08.825 12:48:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:08.825 12:48:27 -- paths/export.sh@5 -- # export PATH 00:03:08.826 12:48:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:08.826 12:48:27 -- nvmf/common.sh@46 -- # : 0 00:03:08.826 12:48:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:08.826 12:48:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:08.826 12:48:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:08.826 12:48:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:08.826 12:48:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:08.826 12:48:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:08.826 12:48:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:08.826 12:48:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:08.826 12:48:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:08.826 12:48:27 -- spdk/autotest.sh@32 -- # uname -s 00:03:08.826 12:48:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:08.826 12:48:27 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:08.826 12:48:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:08.826 12:48:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:08.826 12:48:27 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:08.826 12:48:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:09.394 12:48:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:09.394 12:48:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:09.394 12:48:28 -- spdk/autotest.sh@48 -- # udevadm_pid=93854 00:03:09.394 12:48:28 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:09.394 12:48:28 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:09.394 12:48:28 -- spdk/autotest.sh@54 -- # echo 93916 00:03:09.394 12:48:28 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:09.394 12:48:28 -- spdk/autotest.sh@56 -- # echo 93985 00:03:09.394 12:48:28 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:09.394 12:48:28 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:09.394 12:48:28 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:09.394 12:48:28 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:09.394 12:48:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:09.394 12:48:28 -- common/autotest_common.sh@10 -- # set +x 00:03:09.394 12:48:28 -- spdk/autotest.sh@70 -- # create_test_list 00:03:09.394 12:48:28 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:09.394 12:48:28 -- common/autotest_common.sh@10 -- # set +x 00:03:09.394 12:48:28 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:09.394 12:48:28 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:09.394 12:48:28 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:09.394 12:48:28 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:09.394 12:48:28 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:09.394 12:48:28 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:09.394 12:48:28 -- common/autotest_common.sh@1440 -- # uname 00:03:09.394 12:48:28 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:09.394 12:48:28 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:09.394 12:48:28 -- common/autotest_common.sh@1460 -- # uname 00:03:09.394 12:48:28 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:09.394 12:48:28 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:09.394 12:48:28 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:09.394 12:48:28 -- spdk/autotest.sh@83 -- # hash lcov 00:03:09.394 12:48:28 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:09.394 12:48:28 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:09.394 --rc lcov_branch_coverage=1 00:03:09.394 --rc lcov_function_coverage=1 00:03:09.394 --rc genhtml_branch_coverage=1 00:03:09.394 --rc genhtml_function_coverage=1 00:03:09.394 --rc genhtml_legend=1 00:03:09.394 --rc geninfo_all_blocks=1 00:03:09.394 ' 00:03:09.394 12:48:28 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:09.394 --rc lcov_branch_coverage=1 
00:03:09.394 --rc lcov_function_coverage=1 00:03:09.394 --rc genhtml_branch_coverage=1 00:03:09.394 --rc genhtml_function_coverage=1 00:03:09.394 --rc genhtml_legend=1 00:03:09.394 --rc geninfo_all_blocks=1 00:03:09.394 ' 00:03:09.394 12:48:28 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:09.394 --rc lcov_branch_coverage=1 00:03:09.394 --rc lcov_function_coverage=1 00:03:09.394 --rc genhtml_branch_coverage=1 00:03:09.394 --rc genhtml_function_coverage=1 00:03:09.394 --rc genhtml_legend=1 00:03:09.394 --rc geninfo_all_blocks=1 00:03:09.394 --no-external' 00:03:09.394 12:48:28 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:09.394 --rc lcov_branch_coverage=1 00:03:09.394 --rc lcov_function_coverage=1 00:03:09.394 --rc genhtml_branch_coverage=1 00:03:09.394 --rc genhtml_function_coverage=1 00:03:09.394 --rc genhtml_legend=1 00:03:09.394 --rc geninfo_all_blocks=1 00:03:09.394 --no-external' 00:03:09.394 12:48:28 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:09.394 lcov: LCOV version 1.15 00:03:09.394 12:48:28 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:11.298 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:11.298 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:11.298 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:11.298 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:11.298 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:11.298 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:11.298 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:11.298 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no 
functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:11.299 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:11.299 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:11.299 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:11.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:11.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:11.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:11.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:11.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:11.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:11.558 
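These "no functions found" warnings are expected here: the test/cpp_headers objects appear to come from compile-only checks of the public SPDK headers, so the matching .gcno files carry no function records for gcov to report. A hedged way to reproduce the same capture for just that directory (assuming lcov is available on the runner; the exact coverage command used by this CI step is not shown in the log):

    lcov --capture --directory /home/vagrant/spdk_repo/spdk/test/cpp_headers \
         --output-file cpp_headers.info
    # geninfo, which lcov invokes, emits the same WARNING once per header-only object.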
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:11.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:11.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:11.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:11.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:11.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:11.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:11.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:11.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:11.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:11.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:11.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:11.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:11.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:11.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:58.230 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:58.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:58.230 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:58.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:58.230 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:58.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:58.230 12:49:15 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:58.230 12:49:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:58.230 12:49:15 -- common/autotest_common.sh@10 -- # set +x 00:03:58.230 12:49:15 -- spdk/autotest.sh@102 -- # rm -f 00:03:58.230 12:49:15 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:03:58.230 0000:00:06.0 (1b36 
0010): Already using the nvme driver 00:03:58.230 12:49:16 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:58.230 12:49:16 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:58.230 12:49:16 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:58.230 12:49:16 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:58.230 12:49:16 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:58.230 12:49:16 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:58.230 12:49:16 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:58.230 12:49:16 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.230 12:49:16 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:58.230 12:49:16 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:58.230 12:49:16 -- spdk/autotest.sh@121 -- # grep -v p 00:03:58.230 12:49:16 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:58.230 12:49:16 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:58.230 12:49:16 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:58.230 12:49:16 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:58.230 12:49:16 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:58.230 12:49:16 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:58.230 No valid GPT data, bailing 00:03:58.230 12:49:16 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:58.230 12:49:16 -- scripts/common.sh@393 -- # pt= 00:03:58.230 12:49:16 -- scripts/common.sh@394 -- # return 1 00:03:58.230 12:49:16 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:58.230 1+0 records in 00:03:58.230 1+0 records out 00:03:58.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240908 s, 43.5 MB/s 00:03:58.230 12:49:16 -- spdk/autotest.sh@129 -- # sync 00:03:58.230 12:49:16 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:58.230 12:49:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:58.230 12:49:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:58.796 12:49:17 -- spdk/autotest.sh@135 -- # uname -s 00:03:58.796 12:49:17 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:58.796 12:49:17 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:58.796 12:49:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:58.796 12:49:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:58.796 12:49:17 -- common/autotest_common.sh@10 -- # set +x 00:03:58.796 ************************************ 00:03:58.796 START TEST setup.sh 00:03:58.796 ************************************ 00:03:58.796 12:49:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:59.054 * Looking for test storage... 
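The pre_cleanup trace above first skips zoned namespaces by reading /sys/block/nvme*/queue/zoned, then treats a namespace with no valid GPT ("No valid GPT data, bailing") as free and zeroes its first MiB. A rough, hedged equivalent of that flow, not the SPDK scripts themselves (the 1 MiB wipe size and the blkid probe are taken from the trace; partition names are skipped the way the trace's grep -v p does):

    for dev in /dev/nvme*n*; do
        name=${dev##*/}
        [[ $name == *p* ]] && continue                    # skip partitions such as nvme0n1p1
        zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned != none ]] && continue                  # leave zoned namespaces untouched
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1       # no partition table: scrub the header
        fi
    done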
00:03:59.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:59.054 12:49:17 -- setup/test-setup.sh@10 -- # uname -s 00:03:59.054 12:49:17 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:59.054 12:49:17 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:59.054 12:49:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.054 12:49:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.054 12:49:17 -- common/autotest_common.sh@10 -- # set +x 00:03:59.054 ************************************ 00:03:59.054 START TEST acl 00:03:59.054 ************************************ 00:03:59.054 12:49:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:59.054 * Looking for test storage... 00:03:59.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:59.054 12:49:17 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:59.054 12:49:17 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:59.054 12:49:17 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:59.054 12:49:17 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:59.054 12:49:17 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:59.054 12:49:17 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:59.054 12:49:17 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:59.054 12:49:17 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.054 12:49:17 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:59.054 12:49:17 -- setup/acl.sh@12 -- # devs=() 00:03:59.054 12:49:17 -- setup/acl.sh@12 -- # declare -a devs 00:03:59.054 12:49:17 -- setup/acl.sh@13 -- # drivers=() 00:03:59.054 12:49:17 -- setup/acl.sh@13 -- # declare -A drivers 00:03:59.054 12:49:17 -- setup/acl.sh@51 -- # setup reset 00:03:59.054 12:49:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.054 12:49:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.620 12:49:18 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:59.620 12:49:18 -- setup/acl.sh@16 -- # local dev driver 00:03:59.620 12:49:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.620 12:49:18 -- setup/acl.sh@15 -- # setup output status 00:03:59.620 12:49:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.620 12:49:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:59.620 Hugepages 00:03:59.620 node hugesize free / total 00:03:59.620 12:49:18 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:59.620 12:49:18 -- setup/acl.sh@19 -- # continue 00:03:59.620 12:49:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.620 00:03:59.620 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:59.620 12:49:18 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:59.620 12:49:18 -- setup/acl.sh@19 -- # continue 00:03:59.620 12:49:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.620 12:49:18 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:59.620 12:49:18 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:59.620 12:49:18 -- setup/acl.sh@20 -- # continue 00:03:59.620 12:49:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.878 12:49:18 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:59.878 12:49:18 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:59.878 12:49:18 -- setup/acl.sh@21 -- # 
[[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:59.878 12:49:18 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:59.878 12:49:18 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:59.878 12:49:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.878 12:49:18 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:59.878 12:49:18 -- setup/acl.sh@54 -- # run_test denied denied 00:03:59.878 12:49:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.878 12:49:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.878 12:49:18 -- common/autotest_common.sh@10 -- # set +x 00:03:59.878 ************************************ 00:03:59.878 START TEST denied 00:03:59.878 ************************************ 00:03:59.878 12:49:18 -- common/autotest_common.sh@1104 -- # denied 00:03:59.878 12:49:18 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:59.878 12:49:18 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:59.878 12:49:18 -- setup/acl.sh@38 -- # setup output config 00:03:59.878 12:49:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.878 12:49:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.256 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:01.256 12:49:19 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:01.256 12:49:19 -- setup/acl.sh@28 -- # local dev driver 00:04:01.256 12:49:19 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:01.256 12:49:19 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:01.256 12:49:19 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:01.256 12:49:19 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:01.256 12:49:19 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:01.256 12:49:19 -- setup/acl.sh@41 -- # setup reset 00:04:01.256 12:49:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.256 12:49:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.518 00:04:01.518 real 0m1.877s 00:04:01.518 user 0m0.519s 00:04:01.518 sys 0m1.409s 00:04:01.518 12:49:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.518 ************************************ 00:04:01.518 END TEST denied 00:04:01.518 ************************************ 00:04:01.518 12:49:20 -- common/autotest_common.sh@10 -- # set +x 00:04:01.776 12:49:20 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:01.776 12:49:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:01.776 12:49:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.776 12:49:20 -- common/autotest_common.sh@10 -- # set +x 00:04:01.776 ************************************ 00:04:01.776 START TEST allowed 00:04:01.776 ************************************ 00:04:01.776 12:49:20 -- common/autotest_common.sh@1104 -- # allowed 00:04:01.776 12:49:20 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:01.776 12:49:20 -- setup/acl.sh@45 -- # setup output config 00:04:01.776 12:49:20 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:01.776 12:49:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.776 12:49:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:03.156 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.156 12:49:21 -- setup/acl.sh@47 -- # verify 00:04:03.156 12:49:21 -- setup/acl.sh@28 -- # local dev driver 00:04:03.156 12:49:21 -- setup/acl.sh@48 -- # setup reset 00:04:03.156 12:49:21 -- 
setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.156 12:49:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.722 00:04:03.722 real 0m1.994s 00:04:03.722 user 0m0.459s 00:04:03.722 sys 0m1.498s 00:04:03.722 12:49:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.722 ************************************ 00:04:03.722 END TEST allowed 00:04:03.722 ************************************ 00:04:03.722 12:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:03.722 00:04:03.722 real 0m4.769s 00:04:03.722 user 0m1.534s 00:04:03.722 sys 0m3.287s 00:04:03.722 12:49:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.722 ************************************ 00:04:03.722 END TEST acl 00:04:03.722 ************************************ 00:04:03.722 12:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:03.722 12:49:22 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:03.722 12:49:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.722 12:49:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.722 12:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:03.722 ************************************ 00:04:03.722 START TEST hugepages 00:04:03.722 ************************************ 00:04:03.722 12:49:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:03.722 * Looking for test storage... 00:04:03.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:03.722 12:49:22 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:03.722 12:49:22 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:03.722 12:49:22 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:03.722 12:49:22 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:03.722 12:49:22 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:03.722 12:49:22 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:03.722 12:49:22 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:03.722 12:49:22 -- setup/common.sh@18 -- # local node= 00:04:03.722 12:49:22 -- setup/common.sh@19 -- # local var val 00:04:03.722 12:49:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.722 12:49:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.722 12:49:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.722 12:49:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.722 12:49:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.722 12:49:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.722 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.722 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.722 12:49:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 3089744 kB' 'MemAvailable: 7409500 kB' 'Buffers: 37596 kB' 'Cached: 4408280 kB' 'SwapCached: 0 kB' 'Active: 1198220 kB' 'Inactive: 3372344 kB' 'Active(anon): 133844 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064376 kB' 'Inactive(file): 3370556 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 588 kB' 'Writeback: 0 kB' 'AnonPages: 143020 kB' 'Mapped: 73248 kB' 'Shmem: 2620 kB' 'KReclaimable: 206688 kB' 'Slab: 298384 kB' 'SReclaimable: 206688 kB' 'SUnreclaim: 91696 kB' 'KernelStack: 4832 kB' 'PageTables: 4360 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028396 kB' 'Committed_AS: 615900 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14436 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:03.722 12:49:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.722 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.722 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.722 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.722 12:49:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.722 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.722 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.722 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.722 12:49:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.722 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.722 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.722 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.722 12:49:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.723 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.723 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.723 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.723 12:49:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.723 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.723 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.723 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.723 12:49:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.723 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.723 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.723 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.723 12:49:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 12:49:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # continue 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 12:49:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 12:49:22 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:03.983 12:49:22 -- setup/common.sh@33 -- # echo 2048 00:04:03.983 12:49:22 -- setup/common.sh@33 -- # return 0 00:04:03.983 12:49:22 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:03.983 12:49:22 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 
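The long run of [[ ... == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue lines above is get_meminfo scanning /proc/meminfo field by field until it reaches Hugepagesize and echoes 2048. A compact, hedged sketch of that lookup, covering only the system-wide case (the helper in the trace also strips a "Node N" prefix so it can read per-node meminfo; the function name below is illustrative):

    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_value Hugepagesize    # prints 2048 (kB) on this runner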
00:04:03.983 12:49:22 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:03.983 12:49:22 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:03.983 12:49:22 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:03.983 12:49:22 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:03.983 12:49:22 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:03.983 12:49:22 -- setup/hugepages.sh@207 -- # get_nodes 00:04:03.983 12:49:22 -- setup/hugepages.sh@27 -- # local node 00:04:03.983 12:49:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.983 12:49:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:03.983 12:49:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.983 12:49:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.983 12:49:22 -- setup/hugepages.sh@208 -- # clear_hp 00:04:03.983 12:49:22 -- setup/hugepages.sh@37 -- # local node hp 00:04:03.983 12:49:22 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:03.983 12:49:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.983 12:49:22 -- setup/hugepages.sh@41 -- # echo 0 00:04:03.983 12:49:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.983 12:49:22 -- setup/hugepages.sh@41 -- # echo 0 00:04:03.983 12:49:22 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:03.983 12:49:22 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:03.983 12:49:22 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:03.983 12:49:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.983 12:49:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.983 12:49:22 -- common/autotest_common.sh@10 -- # set +x 00:04:03.983 ************************************ 00:04:03.983 START TEST default_setup 00:04:03.983 ************************************ 00:04:03.983 12:49:22 -- common/autotest_common.sh@1104 -- # default_setup 00:04:03.983 12:49:22 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:03.983 12:49:22 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.983 12:49:22 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:03.983 12:49:22 -- setup/hugepages.sh@51 -- # shift 00:04:03.983 12:49:22 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:04:03.983 12:49:22 -- setup/hugepages.sh@52 -- # local node_ids 00:04:03.983 12:49:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.983 12:49:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.983 12:49:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:03.983 12:49:22 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:03.983 12:49:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.983 12:49:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.983 12:49:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:03.983 12:49:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.983 12:49:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.983 12:49:22 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:03.983 12:49:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.983 12:49:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:03.983 12:49:22 -- setup/hugepages.sh@73 -- # return 0 00:04:03.983 12:49:22 -- setup/hugepages.sh@137 -- # setup output 00:04:03.983 12:49:22 -- setup/common.sh@9 -- # [[ output == 
output ]] 00:04:03.983 12:49:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.241 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:04.500 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.070 12:49:23 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:05.070 12:49:23 -- setup/hugepages.sh@89 -- # local node 00:04:05.070 12:49:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.070 12:49:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.070 12:49:23 -- setup/hugepages.sh@92 -- # local surp 00:04:05.070 12:49:23 -- setup/hugepages.sh@93 -- # local resv 00:04:05.070 12:49:23 -- setup/hugepages.sh@94 -- # local anon 00:04:05.070 12:49:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.070 12:49:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.070 12:49:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.070 12:49:23 -- setup/common.sh@18 -- # local node= 00:04:05.070 12:49:23 -- setup/common.sh@19 -- # local var val 00:04:05.070 12:49:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.070 12:49:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.070 12:49:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.070 12:49:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.070 12:49:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.070 12:49:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5183752 kB' 'MemAvailable: 9503528 kB' 'Buffers: 37596 kB' 'Cached: 4408204 kB' 'SwapCached: 0 kB' 'Active: 1204176 kB' 'Inactive: 3372352 kB' 'Active(anon): 139760 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064416 kB' 'Inactive(file): 3370564 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149144 kB' 'Mapped: 72960 kB' 'Shmem: 2616 kB' 'KReclaimable: 206660 kB' 'Slab: 298724 kB' 'SReclaimable: 206660 kB' 'SUnreclaim: 92064 kB' 'KernelStack: 4640 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 622344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14436 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.070 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.070 12:49:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.071 12:49:23 -- setup/common.sh@33 -- # echo 0 00:04:05.071 12:49:23 -- setup/common.sh@33 -- # return 0 00:04:05.071 12:49:23 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.071 12:49:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.071 12:49:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.071 12:49:23 -- setup/common.sh@18 -- # local node= 00:04:05.071 12:49:23 -- setup/common.sh@19 -- # local var val 00:04:05.071 12:49:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.071 12:49:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.071 12:49:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.071 12:49:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.071 12:49:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.071 12:49:23 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5184012 kB' 'MemAvailable: 9503788 kB' 'Buffers: 37596 kB' 'Cached: 4408204 kB' 'SwapCached: 0 kB' 'Active: 1204436 kB' 'Inactive: 3372352 kB' 'Active(anon): 140020 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064416 kB' 'Inactive(file): 3370564 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149404 kB' 'Mapped: 72960 kB' 'Shmem: 2616 kB' 'KReclaimable: 206660 kB' 'Slab: 298724 kB' 'SReclaimable: 206660 kB' 'SUnreclaim: 92064 kB' 'KernelStack: 4640 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 627716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14452 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.071 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.071 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.071 12:49:23 -- setup/common.sh@31 -- 
# IFS=': '
[per-key scan continues: Active(anon) through HugePages_Free are each read with IFS=': ' and tested against HugePages_Surp; none of them matches, so every iteration takes the `continue` branch]
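The xtrace above is setup/common.sh's get_meminfo: it snapshots /proc/meminfo (or a node's sysfs meminfo file when a node number is passed), strips any "Node <n>" prefix, then walks the "Key: value" pairs until the requested key matches and echoes its value. A minimal standalone sketch of that lookup; the function name and structure here are illustrative, not the SPDK helper itself:

#!/usr/bin/env bash
# Sketch: look one key up in /proc/meminfo, or in a NUMA node's meminfo file.
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local key val _

    # Per-node counters live in sysfs and prefix every line with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Drop the per-node prefix (a no-op for /proc/meminfo), then read the
    # "Key:   value [kB]" pairs until the requested key is found.
    sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r key val _; do
        [[ $key == "$get" ]] && { echo "$val"; break; }
    done
}

# In this run: get_meminfo_sketch HugePages_Surp  -> 0
#              get_meminfo_sketch HugePages_Rsvd  -> 0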
00:04:05.072 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.072 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.072 12:49:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.072 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.072 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.072 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.072 12:49:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.072 12:49:23 -- setup/common.sh@33 -- # echo 0 00:04:05.072 12:49:23 -- setup/common.sh@33 -- # return 0 00:04:05.072 12:49:23 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.072 12:49:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.072 12:49:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.072 12:49:23 -- setup/common.sh@18 -- # local node= 00:04:05.072 12:49:23 -- setup/common.sh@19 -- # local var val 00:04:05.072 12:49:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.072 12:49:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.072 12:49:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.072 12:49:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.072 12:49:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.072 12:49:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.072 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.072 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.072 12:49:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5184012 kB' 'MemAvailable: 9503788 kB' 'Buffers: 37596 kB' 'Cached: 4408204 kB' 'SwapCached: 0 kB' 'Active: 1204696 kB' 'Inactive: 3372352 kB' 'Active(anon): 140280 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064416 kB' 'Inactive(file): 3370564 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149664 kB' 'Mapped: 72960 kB' 'Shmem: 2616 kB' 'KReclaimable: 206660 kB' 'Slab: 298724 kB' 'SReclaimable: 206660 kB' 'SUnreclaim: 92064 kB' 'KernelStack: 4640 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 627716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14452 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:05.072 12:49:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.072 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.072 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.072 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.072 12:49:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.072 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.072 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.072 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.073 12:49:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.073 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.073 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.073 
12:49:23 -- setup/common.sh@31 -- # read -r var val _
[same pattern for Buffers through FileHugePages: each key is tested against HugePages_Rsvd, fails, and falls through to `continue`]
00:04:05.073 12:49:23 --
setup/common.sh@31 -- # IFS=': ' 00:04:05.073 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.073 12:49:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.073 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.074 12:49:23 -- setup/common.sh@33 -- # echo 0 00:04:05.074 12:49:23 -- setup/common.sh@33 -- # return 0 00:04:05.074 nr_hugepages=1024 00:04:05.074 resv_hugepages=0 00:04:05.074 surplus_hugepages=0 00:04:05.074 anon_hugepages=0 00:04:05.074 12:49:23 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.074 12:49:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.074 12:49:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.074 12:49:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.074 12:49:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.074 12:49:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.074 12:49:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.074 12:49:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.074 12:49:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.074 12:49:23 -- setup/common.sh@18 -- # local node= 00:04:05.074 12:49:23 -- setup/common.sh@19 -- # local var val 00:04:05.074 12:49:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.074 12:49:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.074 12:49:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.074 12:49:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.074 12:49:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.074 12:49:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5184272 kB' 'MemAvailable: 9504048 kB' 'Buffers: 37596 kB' 'Cached: 4408204 kB' 'SwapCached: 0 kB' 'Active: 1204436 kB' 'Inactive: 3372352 kB' 'Active(anon): 140020 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064416 kB' 
'Inactive(file): 3370564 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149536 kB' 'Mapped: 72960 kB' 'Shmem: 2616 kB' 'KReclaimable: 206660 kB' 'Slab: 298724 kB' 'SReclaimable: 206660 kB' 'SUnreclaim: 92064 kB' 'KernelStack: 4708 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 632556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14452 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.074 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.074 12:49:23 -- setup/common.sh@32 -- # continue
[the remaining keys, Active(file) through VmallocChunk, are each tested against HugePages_Total and fall through to `continue`]
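These repeated lookups feed the pool consistency check that appears around them in the trace: HugePages_Total has to equal the requested nr_hugepages plus any surplus and reserved pages. A rough sketch of that bookkeeping, reusing the illustrative get_meminfo_sketch above (not the literal verify_nr_hugepages code):

nr_hugepages=1024                                  # what the default_setup test configured
surp=$(get_meminfo_sketch HugePages_Surp)          # surplus pages handed out beyond nr_hugepages
resv=$(get_meminfo_sketch HugePages_Rsvd)          # pages reserved by mappings but not yet faulted in
total=$(get_meminfo_sketch HugePages_Total)

# The pool is consistent when the kernel-wide total accounts for request + surplus + reserved;
# in this run that is 1024 == 1024 + 0 + 0.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "hugepage pool mismatch: $total != $nr_hugepages + $surp + $resv" >&2
fi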
00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.075 12:49:23 -- setup/common.sh@33 -- # echo 1024 00:04:05.075 12:49:23 -- setup/common.sh@33 -- # return 0 00:04:05.075 12:49:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.075 12:49:23 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.075 12:49:23 -- setup/hugepages.sh@27 -- # local node 00:04:05.075 12:49:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.075 12:49:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.075 12:49:23 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.075 12:49:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.075 12:49:23 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:04:05.075 12:49:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.075 12:49:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.075 12:49:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.075 12:49:23 -- setup/common.sh@18 -- # local node=0 00:04:05.075 12:49:23 -- setup/common.sh@19 -- # local var val 00:04:05.075 12:49:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.075 12:49:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.075 12:49:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.075 12:49:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.075 12:49:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.075 12:49:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5184272 kB' 'MemUsed: 7066828 kB' 'Active: 1204436 kB' 'Inactive: 3372352 kB' 'Active(anon): 140020 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064416 kB' 'Inactive(file): 3370564 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'FilePages: 4445800 kB' 'Mapped: 72960 kB' 'AnonPages: 149408 kB' 'Shmem: 2616 kB' 'KernelStack: 4776 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206660 kB' 'Slab: 298724 kB' 'SReclaimable: 206660 kB' 'SUnreclaim: 92064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.075 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.075 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.076 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.076 12:49:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.076 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.076 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.076 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.076 12:49:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.076 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.076 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.076 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.076 12:49:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.076 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.076 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.076 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.076 12:49:23 -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.076 12:49:23 -- setup/common.sh@32 -- # continue
[the scan of node0's meminfo continues: Active(file) through FilePmdMapped are each tested against HugePages_Surp and fall through to `continue`]
00:04:05.076 12:49:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.076 12:49:23
-- setup/common.sh@32 -- # continue 00:04:05.076 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.076 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.076 12:49:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.076 12:49:23 -- setup/common.sh@32 -- # continue 00:04:05.076 12:49:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.076 12:49:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.076 12:49:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.076 12:49:23 -- setup/common.sh@33 -- # echo 0 00:04:05.076 12:49:23 -- setup/common.sh@33 -- # return 0 00:04:05.076 node0=1024 expecting 1024 00:04:05.076 ************************************ 00:04:05.076 END TEST default_setup 00:04:05.076 ************************************ 00:04:05.076 12:49:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.076 12:49:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.076 12:49:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.076 12:49:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.076 12:49:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.076 12:49:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.076 00:04:05.076 real 0m1.116s 00:04:05.076 user 0m0.286s 00:04:05.076 sys 0m0.758s 00:04:05.076 12:49:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.076 12:49:23 -- common/autotest_common.sh@10 -- # set +x 00:04:05.076 12:49:23 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:05.076 12:49:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:05.076 12:49:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:05.076 12:49:23 -- common/autotest_common.sh@10 -- # set +x 00:04:05.076 ************************************ 00:04:05.076 START TEST per_node_1G_alloc 00:04:05.076 ************************************ 00:04:05.076 12:49:23 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:05.076 12:49:23 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:05.076 12:49:23 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:05.076 12:49:23 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:05.076 12:49:23 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:05.076 12:49:23 -- setup/hugepages.sh@51 -- # shift 00:04:05.076 12:49:23 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:04:05.076 12:49:23 -- setup/hugepages.sh@52 -- # local node_ids 00:04:05.076 12:49:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.076 12:49:23 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:05.076 12:49:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:05.077 12:49:23 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:05.077 12:49:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.077 12:49:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:05.077 12:49:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:05.077 12:49:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.077 12:49:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.077 12:49:23 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:05.077 12:49:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:05.077 12:49:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:05.077 12:49:23 -- setup/hugepages.sh@73 -- # return 0 00:04:05.077 12:49:23 
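default_setup has just passed (node0=1024 expecting 1024), and per_node_1G_alloc begins: it converts the requested 1048576 kB into pages of the default 2048 kB hugepage size and pins the whole allocation to one NUMA node through the NRHUGE/HUGENODE environment of scripts/setup.sh. The sizing, sketched under the same assumptions as above (not the literal hugepages.sh code):

size_kb=1048576                                        # 1 GiB requested for a single node
hugepagesize_kb=$(get_meminfo_sketch Hugepagesize)     # 2048 on this VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))          # -> 512

# Hand the result to the SPDK setup script, pinned to NUMA node 0.
NRHUGE=$nr_hugepages HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh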
-- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:05.077 12:49:23 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:05.077 12:49:23 -- setup/hugepages.sh@146 -- # setup output 00:04:05.077 12:49:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.077 12:49:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:05.335 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.597 12:49:24 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:05.597 12:49:24 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:05.597 12:49:24 -- setup/hugepages.sh@89 -- # local node 00:04:05.597 12:49:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.597 12:49:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.597 12:49:24 -- setup/hugepages.sh@92 -- # local surp 00:04:05.597 12:49:24 -- setup/hugepages.sh@93 -- # local resv 00:04:05.597 12:49:24 -- setup/hugepages.sh@94 -- # local anon 00:04:05.597 12:49:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.597 12:49:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.597 12:49:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.597 12:49:24 -- setup/common.sh@18 -- # local node= 00:04:05.597 12:49:24 -- setup/common.sh@19 -- # local var val 00:04:05.597 12:49:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.597 12:49:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.597 12:49:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.597 12:49:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.597 12:49:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.597 12:49:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.597 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.597 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.597 12:49:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 6232032 kB' 'MemAvailable: 10551808 kB' 'Buffers: 37596 kB' 'Cached: 4408204 kB' 'SwapCached: 0 kB' 'Active: 1204544 kB' 'Inactive: 3372356 kB' 'Active(anon): 140132 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370568 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149780 kB' 'Mapped: 73324 kB' 'Shmem: 2616 kB' 'KReclaimable: 206660 kB' 'Slab: 298644 kB' 'SReclaimable: 206660 kB' 'SUnreclaim: 91984 kB' 'KernelStack: 4672 kB' 'PageTables: 3756 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 631068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:05.597 12:49:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.597 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.597 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.597 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.597 12:49:24 -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.597 12:49:24 -- setup/common.sh@32 -- # continue
[MemAvailable through PageTables are each tested against AnonHugePages; none matches, so every iteration takes the `continue` branch]
setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.598 12:49:24 -- setup/common.sh@33 -- # echo 0 00:04:05.598 12:49:24 -- setup/common.sh@33 -- # return 0 00:04:05.598 12:49:24 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.598 12:49:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.598 12:49:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.598 12:49:24 -- setup/common.sh@18 -- # local node= 00:04:05.598 12:49:24 -- setup/common.sh@19 -- # local var val 00:04:05.598 12:49:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.598 12:49:24 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.598 12:49:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.598 12:49:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.598 12:49:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.598 12:49:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 6232032 kB' 'MemAvailable: 10551808 kB' 'Buffers: 37596 kB' 'Cached: 4408204 kB' 'SwapCached: 0 kB' 'Active: 1204544 kB' 'Inactive: 3372356 kB' 'Active(anon): 140132 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370568 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149652 kB' 'Mapped: 73324 kB' 'Shmem: 2616 kB' 'KReclaimable: 206660 kB' 'Slab: 298644 kB' 'SReclaimable: 206660 kB' 'SUnreclaim: 91984 kB' 'KernelStack: 4672 kB' 'PageTables: 3756 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 631068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 
12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.598 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.598 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 
12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.599 12:49:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.599 12:49:24 -- setup/common.sh@33 -- # echo 0 00:04:05.599 12:49:24 -- setup/common.sh@33 -- # return 0 00:04:05.599 12:49:24 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.599 12:49:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.599 12:49:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.599 12:49:24 -- setup/common.sh@18 -- # local node= 00:04:05.599 12:49:24 -- setup/common.sh@19 -- # local var val 00:04:05.599 12:49:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.599 12:49:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.599 12:49:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.599 12:49:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.599 12:49:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.599 12:49:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.599 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 6232040 kB' 'MemAvailable: 10551816 kB' 'Buffers: 37596 kB' 'Cached: 4408204 kB' 'SwapCached: 0 kB' 'Active: 1204672 kB' 'Inactive: 3372356 kB' 'Active(anon): 140260 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370568 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 150020 kB' 'Mapped: 73324 kB' 'Shmem: 2616 kB' 'KReclaimable: 206660 kB' 'Slab: 298644 kB' 'SReclaimable: 206660 kB' 'SUnreclaim: 91984 kB' 'KernelStack: 4656 kB' 'PageTables: 3732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 631068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.600 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.600 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # 
continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.601 12:49:24 -- setup/common.sh@33 -- # echo 0 00:04:05.601 12:49:24 -- setup/common.sh@33 -- # return 0 00:04:05.601 nr_hugepages=512 00:04:05.601 resv_hugepages=0 00:04:05.601 surplus_hugepages=0 00:04:05.601 anon_hugepages=0 00:04:05.601 12:49:24 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.601 12:49:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:05.601 12:49:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.601 12:49:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.601 12:49:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.601 12:49:24 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.601 12:49:24 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:05.601 12:49:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.601 12:49:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.601 12:49:24 -- setup/common.sh@18 -- # local node= 00:04:05.601 12:49:24 -- setup/common.sh@19 -- # local var val 00:04:05.601 12:49:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.601 12:49:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.601 12:49:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.601 12:49:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.601 12:49:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.601 12:49:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.601 12:49:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 6232300 kB' 'MemAvailable: 10552076 kB' 'Buffers: 37596 kB' 'Cached: 4408204 kB' 'SwapCached: 0 kB' 'Active: 1204932 kB' 'Inactive: 3372356 kB' 'Active(anon): 140520 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370568 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149892 kB' 'Mapped: 73324 kB' 'Shmem: 2616 kB' 'KReclaimable: 206660 kB' 'Slab: 298644 kB' 'SReclaimable: 206660 kB' 'SUnreclaim: 91984 kB' 'KernelStack: 4724 kB' 'PageTables: 3732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 635848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.601 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.601 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.601 
12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.602 12:49:24 -- setup/common.sh@33 -- # echo 512 00:04:05.602 12:49:24 -- setup/common.sh@33 -- # return 0 00:04:05.602 12:49:24 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.602 12:49:24 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.602 12:49:24 -- setup/hugepages.sh@27 -- # local node 00:04:05.602 12:49:24 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:05.602 12:49:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.602 12:49:24 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.602 12:49:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.602 12:49:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.602 12:49:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.602 12:49:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.602 12:49:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.602 12:49:24 -- setup/common.sh@18 -- # local node=0 00:04:05.602 12:49:24 -- setup/common.sh@19 -- # local var val 00:04:05.602 12:49:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.602 12:49:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.602 12:49:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.602 12:49:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.602 12:49:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.602 12:49:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.602 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.602 12:49:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 6232692 kB' 'MemUsed: 6018408 kB' 'Active: 1204396 kB' 'Inactive: 3372360 kB' 'Active(anon): 139980 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064416 kB' 'Inactive(file): 3370568 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'FilePages: 4445800 kB' 'Mapped: 72964 kB' 'AnonPages: 149176 kB' 'Shmem: 2616 kB' 'KernelStack: 4696 kB' 'PageTables: 3536 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206660 kB' 'Slab: 298532 kB' 'SReclaimable: 206660 kB' 'SUnreclaim: 91872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.602 12:49:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.603 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.603 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.603 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.603 12:49:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.603 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.603 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.603 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.603 12:49:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.603 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.603 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.603 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.603 12:49:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.603 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.603 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.603 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.603 12:49:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.603 12:49:24 -- setup/common.sh@32 -- # continue 00:04:05.603 12:49:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.603 12:49:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.603 12:49:24 -- setup/common.sh@32 -- 
[... xtrace elided: get_meminfo keeps scanning the remaining per-node meminfo keys (Active(anon) through FilePmdMapped, then HugePages_Total and HugePages_Free); each non-matching key just falls through to `continue`. The backslash-escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p is only how `set -x` renders the literal HugePages_Surp pattern on the right-hand side of `==` ...]
00:04:05.603 12:49:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.603 12:49:24 -- setup/common.sh@33 -- # echo 0
00:04:05.603 12:49:24 -- setup/common.sh@33 -- # return 0
00:04:05.603 node0=512 expecting 512
00:04:05.603 ************************************
00:04:05.603 END TEST per_node_1G_alloc
00:04:05.603 ************************************
00:04:05.603 12:49:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.603 12:49:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.603 12:49:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.603 12:49:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.603 12:49:24 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:05.603 12:49:24 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:05.603
00:04:05.603 real 0m0.651s
00:04:05.603 user 0m0.236s
00:04:05.603 sys 0m0.445s
00:04:05.603 12:49:24 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:05.603 12:49:24 -- common/autotest_common.sh@10 -- # set +x
00:04:05.861 12:49:24 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:05.861 12:49:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:05.861 12:49:24 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:05.861 12:49:24 -- common/autotest_common.sh@10 -- # set +x
00:04:05.861 ************************************
00:04:05.861 START TEST even_2G_alloc
00:04:05.861 ************************************
00:04:05.861 12:49:24 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:05.861 12:49:24 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:05.861 12:49:24 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:05.861 12:49:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:05.861 12:49:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:05.861 12:49:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:05.861 12:49:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:05.861 12:49:24 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:05.861 12:49:24 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:05.861 12:49:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:05.861 12:49:24 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:05.861 12:49:24 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:05.861 12:49:24 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:05.861 12:49:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:05.861 12:49:24 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:05.861 12:49:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:05.861 12:49:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
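The get_test_nr_hugepages trace above boils down to one division plus an even split across nodes. A minimal sketch of that arithmetic, assuming the 2048 kB default hugepage size and the single memory node visible in this run (the variable names are illustrative, not the real setup/hugepages.sh internals):

# Sketch only -- mirrors the sizing seen in the trace, not the actual helpers.
size_kb=2097152        # argument to get_test_nr_hugepages: 2 GiB expressed in kB
hugepage_kb=2048       # Hugepagesize reported in the meminfo snapshots below
no_nodes=1             # this VM exposes a single NUMA node

nr_hugepages=$(( size_kb / hugepage_kb ))   # 2097152 / 2048 = 1024

# HUGE_EVEN_ALLOC=yes: every node receives an equal share of the pool.
declare -a nodes_test
for (( node = 0; node < no_nodes; node++ )); do
  nodes_test[node]=$(( nr_hugepages / no_nodes ))
done

echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # nr_hugepages=1024 node0=1024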
00:04:05.861 12:49:24 -- setup/hugepages.sh@83 -- # : 0
00:04:05.861 12:49:24 -- setup/hugepages.sh@84 -- # : 0
00:04:05.861 12:49:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:05.861 12:49:24 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:05.861 12:49:24 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:05.861 12:49:24 -- setup/hugepages.sh@153 -- # setup output
00:04:05.861 12:49:24 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.861 12:49:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:06.119 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:06.119 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.689 12:49:25 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:06.689 12:49:25 -- setup/hugepages.sh@89 -- # local node
00:04:06.689 12:49:25 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.689 12:49:25 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.689 12:49:25 -- setup/hugepages.sh@92 -- # local surp
00:04:06.689 12:49:25 -- setup/hugepages.sh@93 -- # local resv
00:04:06.689 12:49:25 -- setup/hugepages.sh@94 -- # local anon
00:04:06.689 12:49:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.689 12:49:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[... xtrace elided: setup/common.sh@17-@31 get_meminfo prologue (get=AnonHugePages, no node argument, mem_f=/proc/meminfo, mapfile -t mem, strip any "Node <id>" prefix) ...]
00:04:06.690 12:49:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5184376 kB' 'MemAvailable: 9504176 kB' 'Buffers: 37596 kB' 'Cached: 4408212 kB' 'SwapCached: 0 kB' 'Active: 1204572 kB' 'Inactive: 3372364 kB' 'Active(anon): 140156 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064416 kB' 'Inactive(file): 3370572 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149784 kB' 'Mapped: 73008 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298360 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91684 kB' 'KernelStack: 4700 kB' 'PageTables: 3524 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 618732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14388 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB'
[... xtrace elided: every key from MemTotal through HardwareCorrupted is compared against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped with `continue` ...]
00:04:06.691 12:49:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.691 12:49:25 -- setup/common.sh@33 -- # echo 0
00:04:06.691 12:49:25 -- setup/common.sh@33 -- # return 0
00:04:06.691 12:49:25 -- setup/hugepages.sh@97 -- # anon=0
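The anon=0 result above comes out of the get_meminfo loop that dominates this trace: read the relevant meminfo file once, then walk it line by line, splitting on ': ' until the requested key is found. A simplified bash re-implementation of that loop, true to the shape of the trace but not the literal setup/common.sh code:

#!/usr/bin/env bash
# Simplified re-implementation of the get_meminfo loop traced above.
# Illustrative only; the real setup/common.sh version differs in details.
shopt -s extglob   # needed for the "Node <id> " prefix strip below

get_meminfo() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  # Per-node queries read the node-local counters when that file exists.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi

  local -a mem
  mapfile -t mem < "$mem_f"
  # Node files prefix every line with "Node <id> "; strip it so keys line up.
  mem=("${mem[@]#Node +([0-9]) }")

  local line var val _
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$get" ]]; then
      echo "$val"   # a kB figure, or a bare count for the HugePages_* keys
      return 0
    fi
  done
  return 1          # requested key not present
}

# Usage matching the calls in this trace:
#   anon=$(get_meminfo AnonHugePages); surp=$(get_meminfo HugePages_Surp)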
00:04:06.691 12:49:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... xtrace elided: setup/common.sh@17-@31 get_meminfo prologue (get=HugePages_Surp, mem_f=/proc/meminfo, mapfile -t mem) ...]
00:04:06.691 12:49:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5184376 kB' 'MemAvailable: 9504176 kB' 'Buffers: 37596 kB' 'Cached: 4408212 kB' 'SwapCached: 0 kB' 'Active: 1204832 kB' 'Inactive: 3372364 kB' 'Active(anon): 140416 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064416 kB' 'Inactive(file): 3370572 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 150044 kB' 'Mapped: 73008 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298360 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91684 kB' 'KernelStack: 4700 kB' 'PageTables: 3524 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 624104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14404 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB'
[... xtrace elided: every key from MemTotal through HugePages_Rsvd is compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with `continue` ...]
00:04:06.692 12:49:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.692 12:49:25 -- setup/common.sh@33 -- # echo 0
00:04:06.692 12:49:25 -- setup/common.sh@33 -- # return 0
00:04:06.692 12:49:25 -- setup/hugepages.sh@99 -- # surp=0
00:04:06.692 12:49:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... xtrace elided: setup/common.sh@17-@31 get_meminfo prologue (get=HugePages_Rsvd, mem_f=/proc/meminfo, mapfile -t mem) ...]
00:04:06.692 12:49:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5184864 kB' 'MemAvailable: 9504664 kB' 'Buffers: 37596 kB' 'Cached: 4408212 kB' 'SwapCached: 0 kB' 'Active: 1204572 kB' 'Inactive: 3372364 kB' 'Active(anon): 140156 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064416 kB' 'Inactive(file): 3370572 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149240 kB' 'Mapped: 72972 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298360 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91684 kB' 'KernelStack: 4640 kB' 'PageTables: 3692 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 624104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14404 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB'
[... xtrace elided: every key from MemTotal through HugePages_Free is compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with `continue` ...]
00:04:06.693 12:49:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.693 12:49:25 -- setup/common.sh@33 -- # echo 0
00:04:06.693 12:49:25 -- setup/common.sh@33 -- # return 0
00:04:06.693 12:49:25 -- setup/hugepages.sh@100 -- # resv=0
00:04:06.693 nr_hugepages=1024
00:04:06.693 resv_hugepages=0
00:04:06.693 surplus_hugepages=0
00:04:06.693 12:49:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:06.693 12:49:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.693 12:49:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:06.693 anon_hugepages=0
00:04:06.693 12:49:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.693 12:49:25 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:06.693 12:49:25 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
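At this point anon, surp and resv are all known, and the rest of the verification is a handful of arithmetic comparisons: the pool the kernel reports must account exactly for the pages the test requested. A condensed sketch of those checks, reusing the illustrative get_meminfo above (not the literal verify_nr_hugepages code):

# Condensed sketch of the consistency checks traced here; it reuses the
# illustrative get_meminfo defined above and the values seen in this run.
nr_hugepages=1024                      # what even_2G_alloc asked for

anon=$(get_meminfo AnonHugePages)      # 0  -> no transparent hugepages skewing the count
surp=$(get_meminfo HugePages_Surp)     # 0  -> nothing allocated beyond the configured pool
resv=$(get_meminfo HugePages_Rsvd)     # 0  -> nothing reserved but not yet faulted in
total=$(get_meminfo HugePages_Total)   # 1024

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# The kernel-reported pool must equal the requested pages once surplus and
# reserved pages are accounted for.
(( total == nr_hugepages + surp + resv )) || echo "FAIL: hugepage pool mismatch"
(( total == nr_hugepages ))              || echo "FAIL: unexpected surplus/reserved pages"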
00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5185116 kB' 'MemAvailable: 9504920 kB' 'Buffers: 37596 kB' 'Cached: 4408212 kB' 'SwapCached: 0 kB' 'Active: 1204396 kB' 'Inactive: 3372368 kB' 'Active(anon): 139980 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064416 kB' 'Inactive(file): 3370576 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149472 kB' 'Mapped: 72924 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298476 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91800 kB' 'KernelStack: 4676 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 623852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14420 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.694 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.694 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.695 12:49:25 -- setup/common.sh@33 -- # echo 1024 00:04:06.695 12:49:25 -- setup/common.sh@33 -- # return 0 00:04:06.695 12:49:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.695 12:49:25 -- setup/hugepages.sh@112 -- # get_nodes 
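The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key with IFS=': ' until the requested field (HugePages_Total here) matches, echoing its value (1024) and returning; the (( 1024 == nr_hugepages + surp + resv )) check then confirms the pool matches the requested page count before get_nodes enumerates /sys/devices/system/node/node* to see how many NUMA nodes share that pool. A minimal standalone sketch of the same lookup, assuming the usual "Key: value kB" meminfo layout (illustration only, not the original helper):

get_meminfo_sketch() {
    local get=$1 var val _
    # Split each meminfo line on ':' and spaces, stop at the requested key.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # e.g. 1024 for HugePages_Total in this run
            return 0
        fi
    done < /proc/meminfo
    return 1
}

For a per-node query the test reads /sys/devices/system/node/node0/meminfo instead and strips the leading "Node 0 " prefix before doing the same comparison, as the trace that follows shows.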
00:04:06.695 12:49:25 -- setup/hugepages.sh@27 -- # local node 00:04:06.695 12:49:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.695 12:49:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.695 12:49:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:06.695 12:49:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.695 12:49:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.695 12:49:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.695 12:49:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.695 12:49:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.695 12:49:25 -- setup/common.sh@18 -- # local node=0 00:04:06.695 12:49:25 -- setup/common.sh@19 -- # local var val 00:04:06.695 12:49:25 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.695 12:49:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.695 12:49:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.695 12:49:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.695 12:49:25 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.695 12:49:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5185180 kB' 'MemUsed: 7065920 kB' 'Active: 1204320 kB' 'Inactive: 3372364 kB' 'Active(anon): 139904 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064416 kB' 'Inactive(file): 3370572 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'FilePages: 4445804 kB' 'Mapped: 72936 kB' 'AnonPages: 149484 kB' 'Shmem: 2616 kB' 'KernelStack: 4644 kB' 'PageTables: 3564 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206676 kB' 'Slab: 298508 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.695 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.695 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 
-- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # continue 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.696 12:49:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.696 12:49:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.696 12:49:25 -- setup/common.sh@33 -- # echo 0 00:04:06.696 12:49:25 -- setup/common.sh@33 -- # return 0 00:04:06.696 12:49:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.696 12:49:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.696 12:49:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.696 node0=1024 expecting 1024 00:04:06.696 12:49:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.696 12:49:25 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.696 12:49:25 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.696 00:04:06.696 real 0m0.908s 00:04:06.696 user 0m0.260s 00:04:06.696 sys 0m0.681s 00:04:06.696 12:49:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.696 12:49:25 -- common/autotest_common.sh@10 -- # set +x 00:04:06.696 ************************************ 00:04:06.696 END TEST even_2G_alloc 00:04:06.696 ************************************ 00:04:06.696 12:49:25 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:06.696 12:49:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.696 12:49:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.696 12:49:25 -- common/autotest_common.sh@10 -- # set +x 00:04:06.696 ************************************ 00:04:06.696 START TEST odd_alloc 00:04:06.696 ************************************ 00:04:06.696 12:49:25 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:06.696 12:49:25 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:06.696 12:49:25 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:06.696 12:49:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:06.696 12:49:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.696 12:49:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:06.696 12:49:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:06.696 12:49:25 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:06.696 12:49:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.696 12:49:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:06.696 12:49:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:06.696 12:49:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.696 12:49:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.696 12:49:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.696 12:49:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:06.696 12:49:25 -- 
setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.696 12:49:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:06.696 12:49:25 -- setup/hugepages.sh@83 -- # : 0 00:04:06.696 12:49:25 -- setup/hugepages.sh@84 -- # : 0 00:04:06.696 12:49:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.696 12:49:25 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:06.696 12:49:25 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:06.696 12:49:25 -- setup/hugepages.sh@160 -- # setup output 00:04:06.696 12:49:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.696 12:49:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.955 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:06.955 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.525 12:49:26 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:07.525 12:49:26 -- setup/hugepages.sh@89 -- # local node 00:04:07.525 12:49:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.525 12:49:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.525 12:49:26 -- setup/hugepages.sh@92 -- # local surp 00:04:07.525 12:49:26 -- setup/hugepages.sh@93 -- # local resv 00:04:07.525 12:49:26 -- setup/hugepages.sh@94 -- # local anon 00:04:07.525 12:49:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.525 12:49:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.525 12:49:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.525 12:49:26 -- setup/common.sh@18 -- # local node= 00:04:07.525 12:49:26 -- setup/common.sh@19 -- # local var val 00:04:07.525 12:49:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.525 12:49:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.525 12:49:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.525 12:49:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.525 12:49:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.525 12:49:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.525 12:49:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5181888 kB' 'MemAvailable: 9501696 kB' 'Buffers: 37596 kB' 'Cached: 4408212 kB' 'SwapCached: 0 kB' 'Active: 1204524 kB' 'Inactive: 3372372 kB' 'Active(anon): 140112 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370580 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 150016 kB' 'Mapped: 73220 kB' 'Shmem: 2616 kB' 'KReclaimable: 206680 kB' 'Slab: 298664 kB' 'SReclaimable: 206680 kB' 'SUnreclaim: 91984 kB' 'KernelStack: 4796 kB' 'PageTables: 3996 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 625824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14404 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:07.525 12:49:26 -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.525 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.525 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 
-- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.526 12:49:26 -- setup/common.sh@33 -- # echo 0 00:04:07.526 12:49:26 -- setup/common.sh@33 -- # return 0 00:04:07.526 12:49:26 -- setup/hugepages.sh@97 -- # anon=0 00:04:07.526 12:49:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.526 12:49:26 
-- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.526 12:49:26 -- setup/common.sh@18 -- # local node= 00:04:07.526 12:49:26 -- setup/common.sh@19 -- # local var val 00:04:07.526 12:49:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.526 12:49:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.526 12:49:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.526 12:49:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.526 12:49:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.526 12:49:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5181896 kB' 'MemAvailable: 9501704 kB' 'Buffers: 37596 kB' 'Cached: 4408212 kB' 'SwapCached: 0 kB' 'Active: 1204384 kB' 'Inactive: 3372372 kB' 'Active(anon): 139972 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370580 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149476 kB' 'Mapped: 73220 kB' 'Shmem: 2616 kB' 'KReclaimable: 206680 kB' 'Slab: 298664 kB' 'SReclaimable: 206680 kB' 'SUnreclaim: 91984 kB' 'KernelStack: 4780 kB' 'PageTables: 3960 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 625824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14404 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.526 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.526 12:49:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 
-- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 
-- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- 
setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.527 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.527 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.528 12:49:26 -- setup/common.sh@33 -- # echo 0 00:04:07.528 12:49:26 -- setup/common.sh@33 -- # return 0 00:04:07.528 12:49:26 -- setup/hugepages.sh@99 -- # surp=0 00:04:07.528 12:49:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.528 12:49:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.528 12:49:26 -- setup/common.sh@18 -- # local node= 00:04:07.528 12:49:26 -- setup/common.sh@19 -- # local var val 00:04:07.528 12:49:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.528 12:49:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.528 12:49:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.528 12:49:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.528 12:49:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.528 12:49:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5182212 kB' 'MemAvailable: 9502016 kB' 'Buffers: 37596 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1204628 kB' 'Inactive: 3372368 kB' 'Active(anon): 140216 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370576 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149528 kB' 'Mapped: 73216 kB' 'Shmem: 2616 kB' 'KReclaimable: 206680 kB' 'Slab: 298668 kB' 'SReclaimable: 206680 kB' 'SUnreclaim: 91988 kB' 'KernelStack: 4772 kB' 'PageTables: 4152 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 625824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14420 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.528 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.528 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # 
continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.529 12:49:26 -- setup/common.sh@33 -- # echo 0 00:04:07.529 12:49:26 -- setup/common.sh@33 -- # return 0 00:04:07.529 12:49:26 -- setup/hugepages.sh@100 -- # resv=0 00:04:07.529 12:49:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:07.529 nr_hugepages=1025 00:04:07.529 resv_hugepages=0 00:04:07.529 surplus_hugepages=0 00:04:07.529 12:49:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.529 12:49:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.529 anon_hugepages=0 00:04:07.529 12:49:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.529 12:49:26 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:07.529 12:49:26 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:07.529 12:49:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.529 12:49:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.529 12:49:26 -- setup/common.sh@18 -- # local node= 00:04:07.529 12:49:26 -- setup/common.sh@19 -- # local var val 00:04:07.529 12:49:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.529 12:49:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.529 12:49:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.529 12:49:26 -- setup/common.sh@25 -- # [[ 
-n '' ]] 00:04:07.529 12:49:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.529 12:49:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5182472 kB' 'MemAvailable: 9502276 kB' 'Buffers: 37596 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1204876 kB' 'Inactive: 3372368 kB' 'Active(anon): 140464 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370576 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 149668 kB' 'Mapped: 73052 kB' 'Shmem: 2616 kB' 'KReclaimable: 206680 kB' 'Slab: 298668 kB' 'SReclaimable: 206680 kB' 'SUnreclaim: 91988 kB' 'KernelStack: 4792 kB' 'PageTables: 4060 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 625828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14436 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.529 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.529 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 
12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.530 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.530 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.530 12:49:26 -- setup/common.sh@33 -- # echo 1025 00:04:07.530 12:49:26 -- setup/common.sh@33 -- # return 0 00:04:07.530 
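[editor note] The long field-by-field scan traced above is setup/common.sh's meminfo lookup: it walks every line of /proc/meminfo (or a per-node meminfo file when a node id is given) until the requested key matches, then echoes its value. A simplified re-implementation of that lookup, plus the odd_alloc accounting it feeds, is sketched below; this is a hedged sketch, not the setup/common.sh helper itself, and get_meminfo_sketch is a hypothetical name.

    # Sketch of the lookup traced above: fetch one field from /proc/meminfo,
    # or from a NUMA node's meminfo file when a node id is supplied.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Per-node files prefix each line with "Node <id> "; strip it, match
        # the requested field, and print only the numeric part of its value.
        sed 's/^Node [0-9]* //' "$mem_f" |
            awk -v f="$get" -F': *' '$1 == f { gsub(/[^0-9]/, "", $2); print $2; exit }'
    }

    # Usage mirroring the odd_alloc check performed in this log:
    # HugePages_Total must equal the requested 1025 pages plus surplus
    # and reserved pages.
    nr_hugepages=1025
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    surp=$(get_meminfo_sketch HugePages_Surp)
    total=$(get_meminfo_sketch HugePages_Total)
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"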
12:49:26 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:07.530 12:49:26 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.530 12:49:26 -- setup/hugepages.sh@27 -- # local node 00:04:07.530 12:49:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.530 12:49:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:07.531 12:49:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:07.531 12:49:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.531 12:49:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.531 12:49:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.531 12:49:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.531 12:49:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.531 12:49:26 -- setup/common.sh@18 -- # local node=0 00:04:07.531 12:49:26 -- setup/common.sh@19 -- # local var val 00:04:07.531 12:49:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.531 12:49:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.531 12:49:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.531 12:49:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.531 12:49:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.531 12:49:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5182440 kB' 'MemUsed: 7068660 kB' 'Active: 1204740 kB' 'Inactive: 3372364 kB' 'Active(anon): 140324 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064416 kB' 'Inactive(file): 3370572 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'FilePages: 4445804 kB' 'Mapped: 72904 kB' 'AnonPages: 149648 kB' 'Shmem: 2616 kB' 'KernelStack: 4740 kB' 'PageTables: 4080 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206680 kB' 'Slab: 298740 kB' 'SReclaimable: 206680 kB' 'SUnreclaim: 92060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- 
setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # continue 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.531 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.531 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.531 12:49:26 -- setup/common.sh@33 -- # echo 0 00:04:07.531 12:49:26 -- setup/common.sh@33 -- # return 0 00:04:07.531 12:49:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.531 12:49:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.532 12:49:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.532 12:49:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.532 node0=1025 expecting 1025 00:04:07.532 12:49:26 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:07.532 12:49:26 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:07.532 00:04:07.532 real 0m0.890s 00:04:07.532 user 0m0.259s 00:04:07.532 sys 0m0.663s 00:04:07.532 12:49:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.532 ************************************ 00:04:07.532 END TEST odd_alloc 00:04:07.532 ************************************ 00:04:07.532 12:49:26 -- common/autotest_common.sh@10 -- # set +x 00:04:07.532 12:49:26 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:07.532 12:49:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.532 12:49:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.532 12:49:26 -- common/autotest_common.sh@10 -- # set +x 00:04:07.532 ************************************ 00:04:07.532 START TEST custom_alloc 00:04:07.532 ************************************ 00:04:07.532 12:49:26 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:07.532 12:49:26 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:07.532 12:49:26 -- setup/hugepages.sh@169 -- # local node 00:04:07.532 12:49:26 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:07.532 12:49:26 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:07.532 12:49:26 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:07.532 12:49:26 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:07.532 12:49:26 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:07.532 12:49:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:07.532 12:49:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.532 12:49:26 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:07.532 12:49:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:07.532 12:49:26 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:07.532 12:49:26 -- 
setup/hugepages.sh@62 -- # local user_nodes 00:04:07.532 12:49:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:07.532 12:49:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:07.532 12:49:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.532 12:49:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.532 12:49:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:07.532 12:49:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:07.532 12:49:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.532 12:49:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:07.532 12:49:26 -- setup/hugepages.sh@83 -- # : 0 00:04:07.532 12:49:26 -- setup/hugepages.sh@84 -- # : 0 00:04:07.532 12:49:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.532 12:49:26 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:07.532 12:49:26 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:07.532 12:49:26 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:07.532 12:49:26 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:07.532 12:49:26 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:07.532 12:49:26 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:07.532 12:49:26 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:07.532 12:49:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.532 12:49:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:07.532 12:49:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:07.532 12:49:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.532 12:49:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.532 12:49:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:07.532 12:49:26 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:07.532 12:49:26 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:07.532 12:49:26 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:07.532 12:49:26 -- setup/hugepages.sh@78 -- # return 0 00:04:07.532 12:49:26 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:07.532 12:49:26 -- setup/hugepages.sh@187 -- # setup output 00:04:07.532 12:49:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.532 12:49:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:07.813 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:08.427 12:49:26 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:08.427 12:49:26 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:08.427 12:49:26 -- setup/hugepages.sh@89 -- # local node 00:04:08.427 12:49:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.427 12:49:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.427 12:49:26 -- setup/hugepages.sh@92 -- # local surp 00:04:08.427 12:49:26 -- setup/hugepages.sh@93 -- # local resv 00:04:08.427 12:49:26 -- setup/hugepages.sh@94 -- # local anon 00:04:08.427 12:49:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.427 12:49:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.427 12:49:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.427 12:49:26 -- setup/common.sh@18 -- # local node= 00:04:08.427 12:49:26 -- setup/common.sh@19 -- # local var val 00:04:08.427 12:49:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.427 
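[editor note] The custom_alloc test starting above converts its 1048576 kB request into 512 pages of the host's 2048 kB default hugepage size and pins all of them to the only node via HUGENODE='nodes_hp[0]=512' before re-running setup. As a hedged illustration only (a direct sysfs equivalent, not the scripts/setup.sh code path actually invoked here), that placement amounts to:

    size_kb=1048576          # requested hugetlb size from get_test_nr_hugepages
    hugepagesize_kb=2048     # Hugepagesize reported in the meminfo dumps above
    pages=$(( size_kb / hugepagesize_kb ))   # 512
    echo "$pages" | sudo tee \
        /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages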
12:49:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.427 12:49:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.427 12:49:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.427 12:49:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.427 12:49:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 6246460 kB' 'MemAvailable: 10566264 kB' 'Buffers: 37596 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1191584 kB' 'Inactive: 3372368 kB' 'Active(anon): 127172 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370576 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 136252 kB' 'Mapped: 72744 kB' 'Shmem: 2616 kB' 'KReclaimable: 206680 kB' 'Slab: 298688 kB' 'SReclaimable: 206680 kB' 'SUnreclaim: 92008 kB' 'KernelStack: 4616 kB' 'PageTables: 3364 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 593720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14180 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 
12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.427 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.427 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.428 12:49:26 -- setup/common.sh@33 -- # echo 0 00:04:08.428 12:49:26 -- setup/common.sh@33 -- # return 0 00:04:08.428 12:49:26 -- setup/hugepages.sh@97 -- # anon=0 00:04:08.428 12:49:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.428 12:49:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.428 12:49:26 -- setup/common.sh@18 -- # local node= 00:04:08.428 12:49:26 -- setup/common.sh@19 -- # local var val 00:04:08.428 12:49:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.428 12:49:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.428 12:49:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.428 12:49:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.428 12:49:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.428 12:49:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 6246200 kB' 'MemAvailable: 10566004 kB' 'Buffers: 37596 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1191844 kB' 'Inactive: 3372368 kB' 'Active(anon): 127432 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370576 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 136124 kB' 'Mapped: 72744 kB' 'Shmem: 2616 kB' 'KReclaimable: 206680 kB' 'Slab: 298688 kB' 'SReclaimable: 206680 kB' 'SUnreclaim: 92008 kB' 'KernelStack: 4616 kB' 'PageTables: 3364 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 593720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14180 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.428 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.428 12:49:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.429 12:49:26 -- setup/common.sh@33 -- # echo 0 00:04:08.429 12:49:26 -- setup/common.sh@33 -- # return 0 00:04:08.429 12:49:26 -- setup/hugepages.sh@99 -- # surp=0 00:04:08.429 12:49:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.429 12:49:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.429 12:49:26 -- setup/common.sh@18 -- # local node= 00:04:08.429 12:49:26 -- setup/common.sh@19 -- # local var val 00:04:08.429 12:49:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.429 12:49:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.429 12:49:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
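The trace above is setup/common.sh's get_meminfo walking /proc/meminfo entry by entry: each line is split with IFS=': ' into a key and a value, every key other than the requested HugePages_Surp falls through to continue, and the matching line's value is echoed back (0 here, so the caller sets surp=0). A minimal stand-alone sketch of that pattern, using illustrative names rather than the actual setup/common.sh code, might look like this:

  #!/usr/bin/env bash
  # Sketch only: a simplified get_meminfo in the style of the trace above.
  shopt -s extglob                          # needed for the +([0-9]) strip below
  get_meminfo() {
      local get=$1 node=${2:-}              # meminfo field name, optional NUMA node
      local mem_f=/proc/meminfo mem var val _
      # With a node argument, read that node's counters instead of the global file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <n> "; strip it so the
      # "Key: value" layout matches /proc/meminfo.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip all other meminfo keys
          echo "$val"                        # e.g. 0 for HugePages_Surp
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  get_meminfo HugePages_Surp        # system-wide value
  get_meminfo HugePages_Surp 0      # node 0 value

The per-node variant is the reason for the "Node +([0-9])" strip that recurs throughout the log; later in this run the same loop is pointed at /sys/devices/system/node/node0/meminfo.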
00:04:08.429 12:49:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.429 12:49:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.429 12:49:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 6246812 kB' 'MemAvailable: 10566616 kB' 'Buffers: 37596 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1191040 kB' 'Inactive: 3372368 kB' 'Active(anon): 126628 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370576 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 135868 kB' 'Mapped: 72336 kB' 'Shmem: 2616 kB' 'KReclaimable: 206680 kB' 'Slab: 298688 kB' 'SReclaimable: 206680 kB' 'SUnreclaim: 92008 kB' 'KernelStack: 4552 kB' 'PageTables: 3172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 593720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14180 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.429 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.429 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 
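For orientation, the /proc/meminfo snapshot printed just above is internally consistent: 512 huge pages at the reported 2048 kB Hugepagesize account exactly for the Hugetlb total. A quick check with plain shell arithmetic (not part of the test scripts):

  echo $(( 512 * 2048 ))    # 1048576, matching 'Hugetlb: 1048576 kB' in the snapshot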
00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 
12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.430 12:49:26 -- setup/common.sh@33 -- # echo 0 00:04:08.430 12:49:26 -- setup/common.sh@33 -- # return 0 00:04:08.430 12:49:26 -- setup/hugepages.sh@100 -- # resv=0 00:04:08.430 nr_hugepages=512 00:04:08.430 resv_hugepages=0 00:04:08.430 surplus_hugepages=0 00:04:08.430 12:49:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:08.430 12:49:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.430 12:49:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.430 anon_hugepages=0 00:04:08.430 12:49:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.430 12:49:26 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:08.430 12:49:26 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:08.430 12:49:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.430 12:49:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.430 12:49:26 -- setup/common.sh@18 -- # local node= 00:04:08.430 12:49:26 -- setup/common.sh@19 -- # local var val 00:04:08.430 12:49:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.430 12:49:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.430 12:49:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.430 12:49:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.430 12:49:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.430 12:49:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 6246836 kB' 'MemAvailable: 10566640 kB' 'Buffers: 37596 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1191032 kB' 'Inactive: 3372368 kB' 'Active(anon): 126620 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064412 kB' 'Inactive(file): 3370576 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 135956 kB' 'Mapped: 72124 kB' 'Shmem: 2616 kB' 'KReclaimable: 206680 kB' 'Slab: 298656 kB' 'SReclaimable: 206680 kB' 'SUnreclaim: 91976 kB' 'KernelStack: 4424 kB' 'PageTables: 2556 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 598592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14196 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 
12:49:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.430 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.430 12:49:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 
12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.431 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.431 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.431 12:49:26 -- setup/common.sh@33 -- # echo 512 00:04:08.431 12:49:26 -- setup/common.sh@33 -- # return 0 00:04:08.431 12:49:26 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:08.431 12:49:26 -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.431 12:49:26 -- setup/hugepages.sh@27 -- # local node 00:04:08.431 12:49:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.431 12:49:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.431 12:49:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:08.431 12:49:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.431 12:49:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.431 12:49:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.432 12:49:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.432 12:49:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.432 12:49:26 -- setup/common.sh@18 -- # local node=0 00:04:08.432 12:49:26 -- setup/common.sh@19 -- # local var val 00:04:08.432 12:49:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.432 12:49:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.432 12:49:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.432 12:49:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.432 12:49:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.432 12:49:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 6246576 kB' 'MemUsed: 6004524 kB' 'Active: 1191032 kB' 'Inactive: 3372368 kB' 'Active(anon): 126620 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064412 kB' 
'Inactive(file): 3370576 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'FilePages: 4445804 kB' 'Mapped: 72124 kB' 'AnonPages: 136216 kB' 'Shmem: 2616 kB' 'KernelStack: 4424 kB' 'PageTables: 2556 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206680 kB' 'Slab: 298656 kB' 'SReclaimable: 206680 kB' 'SUnreclaim: 91976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # continue 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.432 12:49:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.432 12:49:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.432 12:49:26 -- setup/common.sh@33 -- # echo 0 00:04:08.432 12:49:26 -- setup/common.sh@33 -- # return 0 00:04:08.432 12:49:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.432 12:49:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.432 12:49:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.432 12:49:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.432 node0=512 expecting 512 00:04:08.432 12:49:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:08.432 12:49:26 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:08.432 00:04:08.432 real 0m0.658s 00:04:08.432 user 0m0.268s 
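Taken together, the custom_alloc verification traced above boils down to a small accounting check: the system-wide pool (HugePages_Total, 512) must equal the requested page count plus surplus and reserved pages (both 0 in this run), and each NUMA node's pool is then reported against its expected share, which yields the 'node0=512 expecting 512' line. A condensed, single-node illustration of that flow, with hypothetical helper names rather than the real setup/hugepages.sh functions:

  #!/usr/bin/env bash
  # Illustrative sketch of the accounting walked through in the trace above.
  meminfo_val() {   # usage: meminfo_val <field> [node]
      local f=/proc/meminfo
      [[ -n ${2:-} ]] && f=/sys/devices/system/node/node$2/meminfo
      awk -v k="$1" '{ sub(/^Node [0-9]+ /, "") } $1 == (k":") { print $2 }' "$f"
  }
  verify_pool() {
      local nr=$1 surp resv total node path
      surp=$(meminfo_val HugePages_Surp)     # 0 in the run above
      resv=$(meminfo_val HugePages_Rsvd)     # 0 in the run above
      total=$(meminfo_val HugePages_Total)   # 512 in the run above
      # System-wide pool must match the requested count plus surplus/reserved.
      (( total == nr + surp + resv )) || return 1
      # Each online node is then checked; this VM exposes a single node0.
      for path in /sys/devices/system/node/node[0-9]*; do
          node=${path##*node}
          echo "node$node=$(meminfo_val HugePages_Total "$node")" \
               "expecting $(( nr + resv + $(meminfo_val HugePages_Surp "$node") ))"
      done
  }
  verify_pool 512    # prints 'node0=512 expecting 512' on this box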
00:04:08.432 sys 0m0.423s 00:04:08.432 12:49:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.432 ************************************ 00:04:08.432 END TEST custom_alloc 00:04:08.432 ************************************ 00:04:08.432 12:49:27 -- common/autotest_common.sh@10 -- # set +x 00:04:08.432 12:49:27 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:08.432 12:49:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.432 12:49:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.432 12:49:27 -- common/autotest_common.sh@10 -- # set +x 00:04:08.432 ************************************ 00:04:08.432 START TEST no_shrink_alloc 00:04:08.432 ************************************ 00:04:08.432 12:49:27 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:08.432 12:49:27 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:08.432 12:49:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:08.432 12:49:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:08.432 12:49:27 -- setup/hugepages.sh@51 -- # shift 00:04:08.432 12:49:27 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:04:08.432 12:49:27 -- setup/hugepages.sh@52 -- # local node_ids 00:04:08.432 12:49:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:08.432 12:49:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:08.432 12:49:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:08.432 12:49:27 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:08.432 12:49:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:08.433 12:49:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:08.433 12:49:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:08.433 12:49:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:08.433 12:49:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:08.433 12:49:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:08.433 12:49:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:08.433 12:49:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:08.433 12:49:27 -- setup/hugepages.sh@73 -- # return 0 00:04:08.433 12:49:27 -- setup/hugepages.sh@198 -- # setup output 00:04:08.433 12:49:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.433 12:49:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:08.690 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:08.690 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.260 12:49:27 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:09.260 12:49:27 -- setup/hugepages.sh@89 -- # local node 00:04:09.260 12:49:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.260 12:49:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.260 12:49:27 -- setup/hugepages.sh@92 -- # local surp 00:04:09.260 12:49:27 -- setup/hugepages.sh@93 -- # local resv 00:04:09.260 12:49:27 -- setup/hugepages.sh@94 -- # local anon 00:04:09.260 12:49:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.260 12:49:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.260 12:49:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.260 12:49:27 -- setup/common.sh@18 -- # local node= 00:04:09.260 12:49:27 -- setup/common.sh@19 -- # local var val 00:04:09.260 12:49:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.260 12:49:27 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.260 12:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.260 12:49:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.260 12:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.260 12:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5199208 kB' 'MemAvailable: 9519008 kB' 'Buffers: 37604 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1190864 kB' 'Inactive: 3372320 kB' 'Active(anon): 126404 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064460 kB' 'Inactive(file): 3370528 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 136016 kB' 'Mapped: 72064 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298400 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91724 kB' 'KernelStack: 4388 kB' 'PageTables: 2904 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 602360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14196 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 
12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.260 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.260 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.261 12:49:27 -- setup/common.sh@33 -- # echo 0 00:04:09.261 12:49:27 -- setup/common.sh@33 -- # return 0 00:04:09.261 12:49:27 -- setup/hugepages.sh@97 -- # anon=0 00:04:09.261 12:49:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.261 12:49:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.261 12:49:27 -- setup/common.sh@18 -- # local node= 00:04:09.261 12:49:27 -- setup/common.sh@19 -- # local var val 00:04:09.261 12:49:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.261 12:49:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.261 12:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.261 12:49:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.261 12:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.261 12:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5199160 kB' 'MemAvailable: 9518968 kB' 'Buffers: 37604 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1191256 kB' 'Inactive: 3372320 kB' 'Active(anon): 126788 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064468 kB' 'Inactive(file): 3370528 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 136300 kB' 'Mapped: 72064 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298424 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91748 kB' 'KernelStack: 4452 kB' 'PageTables: 3040 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 596980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14164 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 
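Every entry in this trace has the same shape: the build-log timestamp, the wall-clock time, the script and line being executed, and the command itself. That is ordinary bash xtrace output with a customized PS4; the two lines below are a minimal, hedged approximation that produces prefixes of the same "script@line -- " form (the framework's real PS4 setting is not shown in this log and likely differs in detail).

# Illustrative only: reproduce an xtrace prefix similar to "12:49:27 -- common.sh@32 -- ".
export PS4=' \t -- ${BASH_SOURCE}@${LINENO} -- '   # \t is expanded to the current time
set -x                                             # echo each command with that prefix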
00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.261 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.261 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.262 12:49:27 -- setup/common.sh@33 -- # echo 0 00:04:09.262 12:49:27 -- setup/common.sh@33 -- # return 0 00:04:09.262 12:49:27 -- setup/hugepages.sh@99 -- # surp=0 00:04:09.262 12:49:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.262 12:49:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.262 12:49:27 -- setup/common.sh@18 -- # local node= 00:04:09.262 12:49:27 -- setup/common.sh@19 -- # local var val 00:04:09.262 12:49:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.262 12:49:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.262 12:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
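Stripped of the trace noise, the lookup being repeated above works like this: pick /proc/meminfo, or the per-node file when a node index is supplied, drop any leading "Node N " prefix, then scan for the requested field and print its value. Below is a minimal sketch under those assumptions; get_meminfo_sketch is a hypothetical stand-in, and the real helper in setup/common.sh may differ in detail.

#!/usr/bin/env bash
# Simplified reconstruction of the meminfo lookup traced above; illustrative only.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem line var val _
    # Prefer the per-node meminfo when a node index was given and the file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node 0 "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Usage: get_meminfo_sketch HugePages_Total      -> system-wide value
#        get_meminfo_sketch HugePages_Surp 0     -> value for NUMA node 0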
00:04:09.262 12:49:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.262 12:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.262 12:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5199436 kB' 'MemAvailable: 9519244 kB' 'Buffers: 37604 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1191220 kB' 'Inactive: 3372320 kB' 'Active(anon): 126752 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064468 kB' 'Inactive(file): 3370528 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 136504 kB' 'Mapped: 72064 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298424 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91748 kB' 'KernelStack: 4420 kB' 'PageTables: 2992 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 591608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14180 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.262 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.262 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 
12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.263 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.263 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.264 12:49:27 -- setup/common.sh@33 -- # echo 0 00:04:09.264 12:49:27 -- setup/common.sh@33 -- # return 0 00:04:09.264 12:49:27 -- setup/hugepages.sh@100 -- # resv=0 00:04:09.264 12:49:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.264 nr_hugepages=1024 00:04:09.264 resv_hugepages=0 00:04:09.264 12:49:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.264 surplus_hugepages=0 00:04:09.264 12:49:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.264 anon_hugepages=0 00:04:09.264 12:49:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.264 12:49:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.264 12:49:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.264 12:49:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.264 12:49:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.264 12:49:27 -- setup/common.sh@18 -- # local node= 00:04:09.264 12:49:27 -- setup/common.sh@19 -- # local var val 00:04:09.264 12:49:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.264 12:49:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.264 12:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.264 12:49:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.264 12:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.264 12:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5199476 kB' 'MemAvailable: 9519284 kB' 'Buffers: 37604 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1190820 kB' 'Inactive: 3372320 kB' 'Active(anon): 126352 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064468 kB' 'Inactive(file): 3370528 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 135716 kB' 'Mapped: 72064 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298424 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91748 kB' 'KernelStack: 4408 kB' 'PageTables: 2828 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 591288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14180 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 
12:49:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 
12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.264 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.264 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.265 12:49:27 -- setup/common.sh@33 -- # echo 1024 00:04:09.265 12:49:27 -- setup/common.sh@33 -- # return 0 00:04:09.265 12:49:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.265 12:49:27 -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.265 12:49:27 -- setup/hugepages.sh@27 -- # local node 00:04:09.265 12:49:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.265 12:49:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.265 12:49:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:09.265 12:49:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.265 12:49:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.265 12:49:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.265 12:49:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.265 12:49:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.265 12:49:27 -- setup/common.sh@18 -- # local node=0 00:04:09.265 12:49:27 -- setup/common.sh@19 -- # local var val 00:04:09.265 12:49:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.265 12:49:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.265 12:49:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.265 12:49:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.265 12:49:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.265 12:49:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5199476 kB' 'MemUsed: 7051624 kB' 'Active: 1191080 kB' 'Inactive: 3372320 kB' 'Active(anon): 126612 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064468 kB' 'Inactive(file): 3370528 kB' 'Unevictable: 18504 kB' 'Mlocked: 
18504 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'FilePages: 4445812 kB' 'Mapped: 72064 kB' 'AnonPages: 135976 kB' 'Shmem: 2616 kB' 'KernelStack: 4408 kB' 'PageTables: 2828 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206676 kB' 'Slab: 298424 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.265 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.265 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
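For the per-node pass the helper switches to /sys/devices/system/node/node0/meminfo. As the printf output above shows, that file prefixes every line with "Node 0 " and reports a slightly different field set (MemUsed and FilePages rather than MemAvailable, Cached or CommitLimit), which is why the prefix-strip step exists. On a machine that has a node0, the hugepage counters it exposes can be inspected directly:

# Print the per-node hugepage counters exactly as the kernel reports them.
grep -E 'HugePages_(Total|Free|Surp)' /sys/devices/system/node/node0/meminfo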
00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # continue 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.266 12:49:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.266 12:49:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.266 12:49:27 -- setup/common.sh@33 -- # echo 0 00:04:09.266 12:49:27 -- setup/common.sh@33 -- # return 0 00:04:09.266 12:49:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.266 12:49:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.266 12:49:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.266 12:49:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.266 node0=1024 expecting 1024 00:04:09.266 12:49:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:09.266 12:49:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.266 12:49:27 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 
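[editor's note] The trace above is setup/hugepages.sh tallying the per-node counters and confirming that node0 holds the 1024 hugepages this test expects ("node0=1024 expecting 1024", then [[ 1024 == 1024 ]]). Below is a minimal sketch of that per-node check; it reads HugePages_Total straight from each node's meminfo file with awk instead of going through the script's get_meminfo/nodes_test machinery, so treat it as an approximation of the check being traced, not the script itself.

#!/usr/bin/env bash
# Rough sketch of the per-node hugepage verification traced above.
# "expected" mirrors the "node0=1024 expecting 1024" line in the log;
# the real script accumulates nodes_test[] via get_meminfo rather than awk.
expected=1024
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
    (( total == expected )) || exit 1
done

On this run the check passes, so CLEAR_HUGE=no and NRHUGE=512 are carried into the next scenario, which is why setup.sh immediately below reports that 1024 pages are already allocated on node0.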
00:04:09.266 12:49:27 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:09.266 12:49:27 -- setup/hugepages.sh@202 -- # setup output 00:04:09.266 12:49:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.266 12:49:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:09.527 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:09.527 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:09.527 12:49:28 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:09.527 12:49:28 -- setup/hugepages.sh@89 -- # local node 00:04:09.527 12:49:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.527 12:49:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.527 12:49:28 -- setup/hugepages.sh@92 -- # local surp 00:04:09.527 12:49:28 -- setup/hugepages.sh@93 -- # local resv 00:04:09.527 12:49:28 -- setup/hugepages.sh@94 -- # local anon 00:04:09.527 12:49:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.527 12:49:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.527 12:49:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.527 12:49:28 -- setup/common.sh@18 -- # local node= 00:04:09.527 12:49:28 -- setup/common.sh@19 -- # local var val 00:04:09.527 12:49:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.527 12:49:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.527 12:49:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.527 12:49:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.527 12:49:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.527 12:49:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.527 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.527 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.527 12:49:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5198256 kB' 'MemAvailable: 9518064 kB' 'Buffers: 37604 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1191332 kB' 'Inactive: 3372292 kB' 'Active(anon): 126836 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064496 kB' 'Inactive(file): 3370500 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 136792 kB' 'Mapped: 72612 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298396 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91720 kB' 'KernelStack: 4616 kB' 'PageTables: 3536 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 586904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14196 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.528 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.528 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.529 12:49:28 -- setup/common.sh@33 -- # echo 0 00:04:09.529 12:49:28 -- setup/common.sh@33 -- # return 0 00:04:09.529 12:49:28 -- setup/hugepages.sh@97 -- # anon=0 00:04:09.529 12:49:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.529 12:49:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.529 12:49:28 -- setup/common.sh@18 -- # local node= 00:04:09.529 12:49:28 -- setup/common.sh@19 -- # local var val 00:04:09.529 12:49:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.529 12:49:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
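[editor's note] The long runs of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" above, and the similar runs that follow, are the xtrace of get_meminfo in setup/common.sh: it loads the relevant meminfo file into an array, strips any leading "Node N " prefix, then walks the lines with IFS=': ' read until the requested field matches and echoes its value. The self-contained approximation below is pieced together from the statements visible in the trace; it is a sketch, not a verbatim copy of the script.

#!/usr/bin/env bash
# Sketch of the get_meminfo helper whose xtrace fills this log.
# Field names and the parsing idiom come from the trace; details may
# differ from the real setup/common.sh.
shopt -s extglob  # needed for the "Node +([0-9]) " prefix strip below
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    # Per-node queries (e.g. "get_meminfo HugePages_Surp 0") read the
    # node's own meminfo file instead of the global one.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}
# Example: get_meminfo HugePages_Total     -> system-wide count
#          get_meminfo HugePages_Surp 0    -> surplus pages on node0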
00:04:09.529 12:49:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.529 12:49:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.529 12:49:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.529 12:49:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5198640 kB' 'MemAvailable: 9518448 kB' 'Buffers: 37604 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1191080 kB' 'Inactive: 3372292 kB' 'Active(anon): 126584 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064496 kB' 'Inactive(file): 3370500 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 136116 kB' 'Mapped: 72148 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298288 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91612 kB' 'KernelStack: 4412 kB' 'PageTables: 2712 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 586904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14196 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': 
' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 
-- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.529 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.529 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.530 12:49:28 -- setup/common.sh@33 -- # echo 0 00:04:09.530 12:49:28 -- setup/common.sh@33 -- # return 0 00:04:09.530 12:49:28 -- setup/hugepages.sh@99 -- # surp=0 00:04:09.530 12:49:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.530 12:49:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.530 12:49:28 -- setup/common.sh@18 -- # local node= 00:04:09.530 12:49:28 -- setup/common.sh@19 -- # local var val 00:04:09.530 12:49:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.530 12:49:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.530 12:49:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.530 12:49:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.530 12:49:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.530 12:49:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5198900 kB' 'MemAvailable: 9518708 kB' 'Buffers: 37604 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1191340 kB' 'Inactive: 3372292 kB' 'Active(anon): 126844 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064496 kB' 'Inactive(file): 3370500 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 135988 kB' 'Mapped: 72148 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298288 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91612 kB' 'KernelStack: 4412 kB' 'PageTables: 2712 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 591784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14196 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.530 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.530 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 
-- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # IFS=': 
' 00:04:09.531 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.531 12:49:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.532 12:49:28 -- setup/common.sh@33 -- # echo 0 00:04:09.532 12:49:28 -- setup/common.sh@33 -- # return 0 00:04:09.532 12:49:28 -- setup/hugepages.sh@100 -- # resv=0 00:04:09.532 12:49:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.532 nr_hugepages=1024 00:04:09.532 resv_hugepages=0 00:04:09.532 12:49:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.532 surplus_hugepages=0 00:04:09.532 12:49:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.532 12:49:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.532 anon_hugepages=0 00:04:09.532 12:49:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.532 12:49:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.532 12:49:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.532 12:49:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.532 12:49:28 -- setup/common.sh@18 -- # local node= 00:04:09.532 12:49:28 -- setup/common.sh@19 -- # local var val 00:04:09.532 12:49:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.532 12:49:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.532 12:49:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.532 12:49:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.532 12:49:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.532 12:49:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12251100 kB' 'MemFree: 5198876 kB' 'MemAvailable: 9518684 kB' 'Buffers: 37604 kB' 'Cached: 4408208 kB' 'SwapCached: 0 kB' 'Active: 1190948 kB' 'Inactive: 3372292 kB' 'Active(anon): 126452 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064496 kB' 'Inactive(file): 3370500 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 135800 kB' 'Mapped: 72144 kB' 'Shmem: 2616 kB' 'KReclaimable: 206676 kB' 'Slab: 298508 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91832 kB' 'KernelStack: 4412 kB' 'PageTables: 2732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 591696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14212 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 4040704 kB' 'DirectMap1G: 10485760 kB' 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- 
setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.532 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.532 12:49:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.533 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.533 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.533 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.533 12:49:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.533 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.533 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.533 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # 
continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.534 12:49:28 -- setup/common.sh@33 -- # echo 1024 00:04:09.534 12:49:28 -- setup/common.sh@33 -- # return 0 00:04:09.534 12:49:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.534 12:49:28 -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.534 12:49:28 -- setup/hugepages.sh@27 -- # local node 00:04:09.534 12:49:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.534 12:49:28 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.534 12:49:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:09.534 12:49:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.534 12:49:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.534 12:49:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.534 12:49:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.534 12:49:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.534 12:49:28 -- setup/common.sh@18 -- # local node=0 00:04:09.534 12:49:28 -- setup/common.sh@19 -- # local var val 00:04:09.534 12:49:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.534 12:49:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.534 12:49:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.534 12:49:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.534 12:49:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.534 12:49:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251100 kB' 'MemFree: 5198876 kB' 'MemUsed: 7052224 kB' 'Active: 1191208 kB' 'Inactive: 3372292 kB' 'Active(anon): 126712 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064496 kB' 'Inactive(file): 3370500 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'FilePages: 4445812 kB' 'Mapped: 72144 kB' 'AnonPages: 135800 kB' 'Shmem: 2616 kB' 'KernelStack: 4412 kB' 'PageTables: 2732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206676 kB' 'Slab: 298508 kB' 'SReclaimable: 206676 kB' 'SUnreclaim: 91832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.534 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.534 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 
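The long run of "[[ <field> == HugePages_Surp ]] ... continue" lines in this part of the trace is setup/common.sh's get_meminfo helper scanning a meminfo dump one field at a time: it picks /proc/meminfo or a per-node meminfo file, strips the "Node <n> " prefix, then reads key/value pairs until the requested key matches. A minimal standalone sketch of that pattern, assuming the structure visible in the trace (the function name and exact layout are illustrative, not the SPDK source):

    #!/usr/bin/env bash
    # Sketch: look up one field from /proc/meminfo, or from a node's meminfo
    # file when a node number is given (the traced call uses node=0).
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines look like "Node 0 MemTotal: ...", so drop the
        # prefix to keep the same field names as /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # On this VM the traced lookups resolved to:
    #   get_meminfo_sketch HugePages_Total     -> 1024
    #   get_meminfo_sketch HugePages_Surp 0    -> 0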
00:04:09.535 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.535 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.535 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.794 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.794 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.794 12:49:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.794 12:49:28 -- setup/common.sh@32 -- # continue 00:04:09.794 12:49:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.794 12:49:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.794 12:49:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.794 12:49:28 -- setup/common.sh@33 -- # echo 0 00:04:09.794 12:49:28 -- setup/common.sh@33 -- # return 0 00:04:09.794 12:49:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.794 12:49:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.794 12:49:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.794 12:49:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.794 node0=1024 expecting 1024 00:04:09.794 12:49:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:09.794 12:49:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.794 00:04:09.794 real 0m1.305s 00:04:09.794 user 0m0.502s 00:04:09.794 sys 0m0.869s 00:04:09.794 12:49:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.794 ************************************ 00:04:09.794 END TEST no_shrink_alloc 00:04:09.794 ************************************ 00:04:09.794 12:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:09.794 12:49:28 -- setup/hugepages.sh@217 -- # clear_hp 00:04:09.794 12:49:28 -- setup/hugepages.sh@37 -- # local node hp 00:04:09.794 12:49:28 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:09.794 12:49:28 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.794 12:49:28 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.794 12:49:28 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.794 12:49:28 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.794 12:49:28 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:09.794 12:49:28 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:09.794 ************************************ 00:04:09.794 END TEST hugepages 00:04:09.794 ************************************ 00:04:09.794 00:04:09.794 real 0m5.932s 00:04:09.794 user 0m2.036s 00:04:09.794 sys 0m4.003s 00:04:09.794 12:49:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.794 12:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:09.794 12:49:28 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:09.794 12:49:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.794 12:49:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.794 12:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:09.794 ************************************ 00:04:09.794 START TEST driver 00:04:09.794 ************************************ 00:04:09.794 12:49:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:09.794 * Looking for test storage... 
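Just before the driver suite starts above, the hugepages section finishes by confirming "node0=1024 expecting 1024" and then running clear_hp, which loops over every node's hugepage pools and echoes 0. xtrace does not print redirections, so the target of that "echo 0" is not visible in the log; a hedged sketch of the cleanup, assuming the value is written to each pool's nr_hugepages (run as root):

    #!/usr/bin/env bash
    # Sketch: return pre-allocated hugepages to the kernel on every NUMA node.
    shopt -s nullglob
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # assumed target of the traced "echo 0"
        done
    done
    export CLEAR_HUGE=yes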
00:04:09.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:09.794 12:49:28 -- setup/driver.sh@68 -- # setup reset 00:04:09.794 12:49:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.794 12:49:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.360 12:49:28 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:10.361 12:49:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.361 12:49:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.361 12:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:10.361 ************************************ 00:04:10.361 START TEST guess_driver 00:04:10.361 ************************************ 00:04:10.361 12:49:28 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:10.361 12:49:28 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:10.361 12:49:28 -- setup/driver.sh@47 -- # local fail=0 00:04:10.361 12:49:28 -- setup/driver.sh@49 -- # pick_driver 00:04:10.361 12:49:28 -- setup/driver.sh@36 -- # vfio 00:04:10.361 12:49:28 -- setup/driver.sh@21 -- # local iommu_grups 00:04:10.361 12:49:28 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:10.361 12:49:28 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:10.361 12:49:28 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:10.361 12:49:28 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:10.361 12:49:28 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:10.361 12:49:28 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:04:10.361 12:49:28 -- setup/driver.sh@32 -- # return 1 00:04:10.361 12:49:28 -- setup/driver.sh@38 -- # uio 00:04:10.361 12:49:28 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:10.361 12:49:28 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:10.361 12:49:28 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:10.361 12:49:28 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:10.361 12:49:28 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko 00:04:10.361 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:04:10.361 12:49:28 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:10.361 12:49:28 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:10.361 12:49:28 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:10.361 Looking for driver=uio_pci_generic 00:04:10.361 12:49:28 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:10.361 12:49:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.361 12:49:28 -- setup/driver.sh@45 -- # setup output config 00:04:10.361 12:49:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.361 12:49:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:10.619 12:49:29 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:10.619 12:49:29 -- setup/driver.sh@58 -- # continue 00:04:10.619 12:49:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.619 12:49:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.619 12:49:29 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:10.619 12:49:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.995 12:49:30 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:11.995 12:49:30 -- setup/driver.sh@65 -- # setup reset 
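The guess_driver trace above shows the decision the test verifies: vfio is preferred when the host exposes IOMMU groups (or unsafe no-IOMMU mode is enabled), and otherwise the test falls back to uio_pci_generic as long as modprobe can resolve its .ko dependencies. A sketch of that decision under the same checks the trace shows; the function name and structure are illustrative, not the exact SPDK helpers:

    #!/usr/bin/env bash
    shopt -s nullglob

    pick_driver_sketch() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # This VM had 0 IOMMU groups, so the run falls through to the uio path.
        if modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found'
        return 1
    }

    pick_driver_sketch    # printed "uio_pci_generic" in this run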
00:04:11.995 12:49:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.995 12:49:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.254 00:04:12.254 real 0m1.941s 00:04:12.254 user 0m0.434s 00:04:12.254 sys 0m1.485s 00:04:12.254 ************************************ 00:04:12.254 12:49:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.254 12:49:30 -- common/autotest_common.sh@10 -- # set +x 00:04:12.254 END TEST guess_driver 00:04:12.254 ************************************ 00:04:12.254 00:04:12.254 real 0m2.478s 00:04:12.254 user 0m0.717s 00:04:12.254 sys 0m1.758s 00:04:12.254 12:49:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.254 ************************************ 00:04:12.254 12:49:30 -- common/autotest_common.sh@10 -- # set +x 00:04:12.254 END TEST driver 00:04:12.254 ************************************ 00:04:12.254 12:49:30 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:12.254 12:49:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:12.254 12:49:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.254 12:49:30 -- common/autotest_common.sh@10 -- # set +x 00:04:12.254 ************************************ 00:04:12.254 START TEST devices 00:04:12.254 ************************************ 00:04:12.254 12:49:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:12.254 * Looking for test storage... 00:04:12.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:12.254 12:49:31 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:12.254 12:49:31 -- setup/devices.sh@192 -- # setup reset 00:04:12.254 12:49:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.254 12:49:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.822 12:49:31 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:12.822 12:49:31 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:12.822 12:49:31 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:12.822 12:49:31 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:12.822 12:49:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:12.822 12:49:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:12.822 12:49:31 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:12.822 12:49:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.822 12:49:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:12.822 12:49:31 -- setup/devices.sh@196 -- # blocks=() 00:04:12.822 12:49:31 -- setup/devices.sh@196 -- # declare -a blocks 00:04:12.822 12:49:31 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:12.822 12:49:31 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:12.822 12:49:31 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:12.822 12:49:31 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:12.822 12:49:31 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:12.822 12:49:31 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:12.822 12:49:31 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:12.822 12:49:31 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:12.822 12:49:31 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:12.822 12:49:31 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:12.822 12:49:31 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:12.822 No valid GPT data, bailing 00:04:12.822 12:49:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.822 12:49:31 -- scripts/common.sh@393 -- # pt= 00:04:12.822 12:49:31 -- scripts/common.sh@394 -- # return 1 00:04:12.822 12:49:31 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:12.822 12:49:31 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:12.822 12:49:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:12.822 12:49:31 -- setup/common.sh@80 -- # echo 5368709120 00:04:12.822 12:49:31 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:12.822 12:49:31 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:12.822 12:49:31 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:12.822 12:49:31 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:12.822 12:49:31 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:12.822 12:49:31 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:12.822 12:49:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:12.822 12:49:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.822 12:49:31 -- common/autotest_common.sh@10 -- # set +x 00:04:12.822 ************************************ 00:04:12.822 START TEST nvme_mount 00:04:12.822 ************************************ 00:04:12.822 12:49:31 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:12.822 12:49:31 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:12.822 12:49:31 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:12.822 12:49:31 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.822 12:49:31 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.822 12:49:31 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:12.822 12:49:31 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:12.822 12:49:31 -- setup/common.sh@40 -- # local part_no=1 00:04:12.822 12:49:31 -- setup/common.sh@41 -- # local size=1073741824 00:04:12.822 12:49:31 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:12.822 12:49:31 -- setup/common.sh@44 -- # parts=() 00:04:12.822 12:49:31 -- setup/common.sh@44 -- # local parts 00:04:12.822 12:49:31 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:12.822 12:49:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.822 12:49:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:12.822 12:49:31 -- setup/common.sh@46 -- # (( part++ )) 00:04:12.822 12:49:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.822 12:49:31 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:12.822 12:49:31 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:12.822 12:49:31 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:13.756 Creating new GPT entries in memory. 00:04:13.756 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:13.756 other utilities. 00:04:13.756 12:49:32 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:13.756 12:49:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.756 12:49:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:13.756 12:49:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:13.756 12:49:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:15.155 Creating new GPT entries in memory. 00:04:15.155 The operation has completed successfully. 00:04:15.155 12:49:33 -- setup/common.sh@57 -- # (( part++ )) 00:04:15.155 12:49:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.155 12:49:33 -- setup/common.sh@62 -- # wait 98277 00:04:15.155 12:49:33 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.155 12:49:33 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:15.155 12:49:33 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.155 12:49:33 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:15.155 12:49:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:15.155 12:49:33 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.155 12:49:33 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:15.155 12:49:33 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:15.155 12:49:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:15.155 12:49:33 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.155 12:49:33 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:15.155 12:49:33 -- setup/devices.sh@53 -- # local found=0 00:04:15.155 12:49:33 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.155 12:49:33 -- setup/devices.sh@56 -- # : 00:04:15.155 12:49:33 -- setup/devices.sh@59 -- # local pci status 00:04:15.155 12:49:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.155 12:49:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:15.155 12:49:33 -- setup/devices.sh@47 -- # setup output config 00:04:15.155 12:49:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.155 12:49:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:15.155 12:49:33 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:15.155 12:49:33 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:15.155 12:49:33 -- setup/devices.sh@63 -- # found=1 00:04:15.155 12:49:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.155 12:49:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:15.155 12:49:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.155 12:49:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:15.155 12:49:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.530 12:49:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.530 12:49:35 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:16.530 12:49:35 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.530 12:49:35 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.530 12:49:35 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:16.530 12:49:35 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:16.530 12:49:35 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.530 12:49:35 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.530 12:49:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:16.530 12:49:35 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:16.530 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:16.530 12:49:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:16.530 12:49:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:16.530 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:16.530 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:16.530 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:16.530 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:16.530 12:49:35 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:16.530 12:49:35 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:16.530 12:49:35 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.530 12:49:35 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:16.530 12:49:35 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:16.530 12:49:35 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.530 12:49:35 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:16.530 12:49:35 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:16.530 12:49:35 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:16.530 12:49:35 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:16.530 12:49:35 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:16.530 12:49:35 -- setup/devices.sh@53 -- # local found=0 00:04:16.530 12:49:35 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.530 12:49:35 -- setup/devices.sh@56 -- # : 00:04:16.530 12:49:35 -- setup/devices.sh@59 -- # local pci status 00:04:16.530 12:49:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:16.530 12:49:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.530 12:49:35 -- setup/devices.sh@47 -- # setup output config 00:04:16.530 12:49:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.530 12:49:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:16.530 12:49:35 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:16.530 12:49:35 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:16.530 12:49:35 -- setup/devices.sh@63 -- # found=1 00:04:16.530 12:49:35 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:04:16.530 12:49:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:16.530 12:49:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.788 12:49:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:16.788 12:49:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.723 12:49:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.723 12:49:36 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:17.723 12:49:36 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.723 12:49:36 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.723 12:49:36 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:17.723 12:49:36 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.723 12:49:36 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:17.723 12:49:36 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:17.723 12:49:36 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:17.723 12:49:36 -- setup/devices.sh@50 -- # local mount_point= 00:04:17.723 12:49:36 -- setup/devices.sh@51 -- # local test_file= 00:04:17.723 12:49:36 -- setup/devices.sh@53 -- # local found=0 00:04:17.723 12:49:36 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:17.723 12:49:36 -- setup/devices.sh@59 -- # local pci status 00:04:17.723 12:49:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.723 12:49:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:17.723 12:49:36 -- setup/devices.sh@47 -- # setup output config 00:04:17.723 12:49:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.723 12:49:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:17.982 12:49:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:17.982 12:49:36 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:17.982 12:49:36 -- setup/devices.sh@63 -- # found=1 00:04:17.982 12:49:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.982 12:49:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:17.982 12:49:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.241 12:49:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.241 12:49:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.176 12:49:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.176 12:49:37 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:19.176 12:49:37 -- setup/devices.sh@68 -- # return 0 00:04:19.176 12:49:37 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:19.176 12:49:37 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.176 12:49:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:19.176 12:49:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:19.176 12:49:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:19.176 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:19.176 00:04:19.176 real 0m6.453s 00:04:19.176 user 0m0.682s 00:04:19.176 sys 0m3.639s 00:04:19.176 12:49:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.176 12:49:37 -- 
common/autotest_common.sh@10 -- # set +x 00:04:19.176 ************************************ 00:04:19.176 END TEST nvme_mount 00:04:19.177 ************************************ 00:04:19.177 12:49:37 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:19.177 12:49:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:19.177 12:49:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.177 12:49:37 -- common/autotest_common.sh@10 -- # set +x 00:04:19.435 ************************************ 00:04:19.435 START TEST dm_mount 00:04:19.435 ************************************ 00:04:19.435 12:49:38 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:19.435 12:49:38 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:19.435 12:49:38 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:19.435 12:49:38 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:19.435 12:49:38 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:19.435 12:49:38 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:19.435 12:49:38 -- setup/common.sh@40 -- # local part_no=2 00:04:19.435 12:49:38 -- setup/common.sh@41 -- # local size=1073741824 00:04:19.435 12:49:38 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.435 12:49:38 -- setup/common.sh@44 -- # parts=() 00:04:19.435 12:49:38 -- setup/common.sh@44 -- # local parts 00:04:19.435 12:49:38 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.435 12:49:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.435 12:49:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.435 12:49:38 -- setup/common.sh@46 -- # (( part++ )) 00:04:19.435 12:49:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.435 12:49:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.435 12:49:38 -- setup/common.sh@46 -- # (( part++ )) 00:04:19.435 12:49:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.435 12:49:38 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:19.435 12:49:38 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:19.435 12:49:38 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:20.369 Creating new GPT entries in memory. 00:04:20.369 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:20.369 other utilities. 00:04:20.369 12:49:39 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:20.369 12:49:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.369 12:49:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.369 12:49:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.369 12:49:39 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:21.303 Creating new GPT entries in memory. 00:04:21.303 The operation has completed successfully. 00:04:21.303 12:49:40 -- setup/common.sh@57 -- # (( part++ )) 00:04:21.303 12:49:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.303 12:49:40 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:21.303 12:49:40 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:21.303 12:49:40 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:22.678 The operation has completed successfully. 
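Both nvme_mount (completed above) and dm_mount (continuing below) exercise the same partition/format/mount cycle that the sgdisk, mkfs.ext4 and mount calls in the trace show: wipe the GPT, carve one or two 128 MiB partitions, make an ext4 filesystem, mount it, write a test file, then unmount and wipe. A condensed sketch of that flow against a scratch disk; the device name and mount point are placeholders, and the device-mapper table at the end is an assumption, since the trace only shows "dmsetup create nvme_dm_test" without the table fed to it:

    #!/usr/bin/env bash
    set -euo pipefail
    disk=/dev/nvme0n1          # scratch disk; this destroys all data on it
    mnt=/tmp/test_mount
    # 1. Wipe old GPT/MBR metadata and create a 128 MiB partition
    #    (sectors 2048..264191), as the traced nvme_mount test does.
    sgdisk "$disk" --zap-all
    sgdisk "$disk" --new=1:2048:264191
    # 2. Format and mount the new partition.
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    # ... run I/O against $mnt ...
    umount "$mnt"
    wipefs --all "${disk}p1"
    # The dm_mount variant adds a second 128 MiB partition (sectors
    # 264192..526335) and maps both through device-mapper before formatting.
    # The concatenation table below is an assumed example, not taken from the log:
    #   sgdisk "$disk" --new=2:264192:526335
    #   dmsetup create nvme_dm_test <<EOF
    #   0 262144 linear ${disk}p1 0
    #   262144 262144 linear ${disk}p2 0
    #   EOF
    #   mkfs.ext4 -qF /dev/mapper/nvme_dm_test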
00:04:22.678 12:49:41 -- setup/common.sh@57 -- # (( part++ )) 00:04:22.678 12:49:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.678 12:49:41 -- setup/common.sh@62 -- # wait 98758 00:04:22.678 12:49:41 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:22.678 12:49:41 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.678 12:49:41 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:22.678 12:49:41 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:22.678 12:49:41 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:22.678 12:49:41 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:22.678 12:49:41 -- setup/devices.sh@161 -- # break 00:04:22.678 12:49:41 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:22.678 12:49:41 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:22.678 12:49:41 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:22.678 12:49:41 -- setup/devices.sh@166 -- # dm=dm-0 00:04:22.678 12:49:41 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:22.678 12:49:41 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:22.678 12:49:41 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.678 12:49:41 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:22.678 12:49:41 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.678 12:49:41 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:22.678 12:49:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:22.678 12:49:41 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.678 12:49:41 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:22.678 12:49:41 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:22.678 12:49:41 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:22.678 12:49:41 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:22.678 12:49:41 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:22.678 12:49:41 -- setup/devices.sh@53 -- # local found=0 00:04:22.678 12:49:41 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:22.678 12:49:41 -- setup/devices.sh@56 -- # : 00:04:22.678 12:49:41 -- setup/devices.sh@59 -- # local pci status 00:04:22.678 12:49:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.678 12:49:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:22.678 12:49:41 -- setup/devices.sh@47 -- # setup output config 00:04:22.678 12:49:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.678 12:49:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:22.678 12:49:41 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:22.678 12:49:41 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:22.678 12:49:41 -- setup/devices.sh@63 -- # found=1 00:04:22.678 12:49:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.678 12:49:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:22.678 12:49:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.937 12:49:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:22.937 12:49:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.870 12:49:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.871 12:49:42 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:23.871 12:49:42 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.871 12:49:42 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:23.871 12:49:42 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:23.871 12:49:42 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:23.871 12:49:42 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:23.871 12:49:42 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:23.871 12:49:42 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:23.871 12:49:42 -- setup/devices.sh@50 -- # local mount_point= 00:04:23.871 12:49:42 -- setup/devices.sh@51 -- # local test_file= 00:04:23.871 12:49:42 -- setup/devices.sh@53 -- # local found=0 00:04:23.871 12:49:42 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:23.871 12:49:42 -- setup/devices.sh@59 -- # local pci status 00:04:23.871 12:49:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.871 12:49:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:23.871 12:49:42 -- setup/devices.sh@47 -- # setup output config 00:04:23.871 12:49:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.871 12:49:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:24.129 12:49:42 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:24.129 12:49:42 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:24.129 12:49:42 -- setup/devices.sh@63 -- # found=1 00:04:24.129 12:49:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.129 12:49:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:24.129 12:49:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.129 12:49:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:24.129 12:49:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.505 12:49:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.505 12:49:43 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:25.505 12:49:43 -- setup/devices.sh@68 -- # return 0 00:04:25.505 12:49:43 -- setup/devices.sh@187 -- # cleanup_dm 00:04:25.505 12:49:43 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.505 12:49:43 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:25.505 12:49:43 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:25.505 12:49:44 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.505 12:49:44 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:25.505 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.505 12:49:44 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.505 12:49:44 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:25.505 00:04:25.505 real 0m6.068s 00:04:25.505 user 0m0.440s 00:04:25.505 sys 0m2.374s 00:04:25.505 12:49:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.505 ************************************ 00:04:25.505 END TEST dm_mount 00:04:25.505 ************************************ 00:04:25.505 12:49:44 -- common/autotest_common.sh@10 -- # set +x 00:04:25.505 12:49:44 -- setup/devices.sh@1 -- # cleanup 00:04:25.505 12:49:44 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:25.505 12:49:44 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.505 12:49:44 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.505 12:49:44 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:25.505 12:49:44 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.505 12:49:44 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.505 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:25.505 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:25.505 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:25.505 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:25.505 12:49:44 -- setup/devices.sh@12 -- # cleanup_dm 00:04:25.505 12:49:44 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.505 12:49:44 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:25.505 12:49:44 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.505 12:49:44 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.505 12:49:44 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.505 12:49:44 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:25.505 ************************************ 00:04:25.505 END TEST devices 00:04:25.505 ************************************ 00:04:25.505 00:04:25.505 real 0m13.247s 00:04:25.505 user 0m1.496s 00:04:25.505 sys 0m6.303s 00:04:25.505 12:49:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.505 12:49:44 -- common/autotest_common.sh@10 -- # set +x 00:04:25.505 00:04:25.505 real 0m26.680s 00:04:25.505 user 0m5.933s 00:04:25.505 sys 0m15.445s 00:04:25.505 12:49:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.505 ************************************ 00:04:25.505 END TEST setup.sh 00:04:25.505 ************************************ 00:04:25.505 12:49:44 -- common/autotest_common.sh@10 -- # set +x 00:04:25.505 12:49:44 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:25.764 Hugepages 00:04:25.764 node hugesize free / total 00:04:25.764 node0 1048576kB 0 / 0 00:04:25.764 node0 2048kB 2048 / 2048 00:04:25.764 00:04:25.764 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.764 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:25.764 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:25.764 12:49:44 -- spdk/autotest.sh@141 -- # uname -s 00:04:26.021 12:49:44 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:26.021 12:49:44 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:04:26.021 12:49:44 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:26.279 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.653 12:49:46 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:28.648 12:49:47 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:28.648 12:49:47 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:28.648 12:49:47 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:28.648 12:49:47 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:28.648 12:49:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:28.648 12:49:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:28.648 12:49:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.648 12:49:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:28.648 12:49:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:28.648 12:49:47 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:28.648 12:49:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:04:28.648 12:49:47 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:28.906 Waiting for block devices as requested 00:04:28.906 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:28.906 12:49:47 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:28.906 12:49:47 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:28.906 12:49:47 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:28.906 12:49:47 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:28.906 12:49:47 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:28.906 12:49:47 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:28.906 12:49:47 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:28.906 12:49:47 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:28.906 12:49:47 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:28.906 12:49:47 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:28.906 12:49:47 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:28.906 12:49:47 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:28.906 12:49:47 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:28.906 12:49:47 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:28.906 12:49:47 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:28.906 12:49:47 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:28.906 12:49:47 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:28.906 12:49:47 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:28.906 12:49:47 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:28.906 12:49:47 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:28.906 12:49:47 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:28.906 12:49:47 -- common/autotest_common.sh@1542 -- # continue 00:04:28.906 12:49:47 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:28.906 12:49:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:28.906 12:49:47 -- common/autotest_common.sh@10 -- # set +x 00:04:28.906 12:49:47 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:28.906 12:49:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:28.906 12:49:47 -- common/autotest_common.sh@10 -- # set +x 00:04:28.906 12:49:47 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:29.473 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.850 12:49:49 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:30.850 12:49:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:30.850 12:49:49 -- common/autotest_common.sh@10 -- # set +x 00:04:30.850 12:49:49 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:30.850 12:49:49 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:30.850 12:49:49 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:30.850 12:49:49 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:30.850 12:49:49 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:30.850 12:49:49 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:30.850 12:49:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:30.850 12:49:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:30.850 12:49:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.850 12:49:49 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:30.850 12:49:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:30.850 12:49:49 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:30.850 12:49:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:04:30.850 12:49:49 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:30.850 12:49:49 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:30.850 12:49:49 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:30.850 12:49:49 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:30.850 12:49:49 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:30.850 12:49:49 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:30.850 12:49:49 -- common/autotest_common.sh@1578 -- # return 0 00:04:30.850 12:49:49 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:04:30.850 12:49:49 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:30.850 12:49:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.850 12:49:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.850 12:49:49 -- common/autotest_common.sh@10 -- # set +x 00:04:30.850 ************************************ 00:04:30.850 START TEST unittest 00:04:30.850 ************************************ 00:04:30.850 12:49:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:30.850 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:30.850 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:30.850 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:30.850 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:30.850 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:04:30.850 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:30.850 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:30.850 ++ rpc_py=rpc_cmd 00:04:30.850 ++ set -e 00:04:30.850 ++ shopt -s nullglob 00:04:30.850 ++ shopt -s extglob 00:04:30.850 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:30.850 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:30.850 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:30.850 +++ CONFIG_FIO_PLUGIN=y 00:04:30.850 +++ CONFIG_NVME_CUSE=y 00:04:30.850 +++ CONFIG_RAID5F=y 00:04:30.850 +++ CONFIG_LTO=n 00:04:30.850 +++ CONFIG_SMA=n 00:04:30.850 +++ CONFIG_ISAL=y 00:04:30.850 +++ CONFIG_OPENSSL_PATH= 00:04:30.850 +++ CONFIG_IDXD_KERNEL=n 00:04:30.850 +++ CONFIG_URING_PATH= 00:04:30.850 +++ CONFIG_DAOS=n 00:04:30.850 +++ CONFIG_DPDK_LIB_DIR= 00:04:30.850 +++ CONFIG_OCF=n 00:04:30.850 +++ CONFIG_EXAMPLES=y 00:04:30.850 +++ CONFIG_RDMA_PROV=verbs 00:04:30.850 +++ CONFIG_ISCSI_INITIATOR=y 00:04:30.850 +++ CONFIG_VTUNE=n 00:04:30.850 +++ CONFIG_DPDK_INC_DIR= 00:04:30.850 +++ CONFIG_CET=n 00:04:30.850 +++ CONFIG_TESTS=y 00:04:30.850 +++ CONFIG_APPS=y 00:04:30.850 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:30.850 +++ CONFIG_DAOS_DIR= 00:04:30.850 +++ CONFIG_CRYPTO_MLX5=n 00:04:30.850 +++ CONFIG_XNVME=n 00:04:30.850 +++ CONFIG_UNIT_TESTS=y 00:04:30.850 +++ CONFIG_FUSE=n 00:04:30.850 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:04:30.850 +++ CONFIG_OCF_PATH= 00:04:30.850 +++ CONFIG_WPDK_DIR= 00:04:30.850 +++ CONFIG_VFIO_USER=n 00:04:30.850 +++ CONFIG_MAX_LCORES= 00:04:30.850 +++ CONFIG_ARCH=native 00:04:30.850 +++ CONFIG_TSAN=n 00:04:30.850 +++ CONFIG_VIRTIO=y 00:04:30.850 +++ CONFIG_IPSEC_MB=n 00:04:30.850 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:30.850 +++ CONFIG_ASAN=y 00:04:30.850 +++ CONFIG_SHARED=n 00:04:30.850 +++ CONFIG_VTUNE_DIR= 00:04:30.850 +++ CONFIG_RDMA_SET_TOS=y 00:04:30.850 +++ CONFIG_VBDEV_COMPRESS=n 00:04:30.850 +++ CONFIG_VFIO_USER_DIR= 00:04:30.850 +++ CONFIG_FUZZER_LIB= 00:04:30.850 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:30.850 +++ CONFIG_USDT=n 00:04:30.850 +++ CONFIG_URING_ZNS=n 00:04:30.850 +++ CONFIG_FC_PATH= 00:04:30.850 +++ CONFIG_COVERAGE=y 00:04:30.850 +++ CONFIG_CUSTOMOCF=n 00:04:30.850 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:30.850 +++ CONFIG_WERROR=y 00:04:30.850 +++ CONFIG_DEBUG=y 00:04:30.850 +++ CONFIG_RDMA=y 00:04:30.850 +++ CONFIG_HAVE_ARC4RANDOM=n 00:04:30.850 +++ CONFIG_FUZZER=n 00:04:30.850 +++ CONFIG_FC=n 00:04:30.850 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:30.850 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:30.850 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:30.851 +++ CONFIG_CROSS_PREFIX= 00:04:30.851 +++ CONFIG_PREFIX=/usr/local 00:04:30.851 +++ CONFIG_HAVE_LIBBSD=n 00:04:30.851 +++ CONFIG_UBSAN=y 00:04:30.851 +++ CONFIG_PGO_CAPTURE=n 00:04:30.851 +++ CONFIG_UBLK=n 00:04:30.851 +++ CONFIG_ISAL_CRYPTO=y 00:04:30.851 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:04:30.851 +++ CONFIG_CRYPTO=n 00:04:30.851 +++ CONFIG_RBD=n 00:04:30.851 +++ CONFIG_LIBDIR= 00:04:30.851 +++ CONFIG_IPSEC_MB_DIR= 00:04:30.851 +++ CONFIG_PGO_USE=n 00:04:30.851 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:30.851 +++ CONFIG_GOLANG=n 00:04:30.851 +++ CONFIG_VHOST=y 00:04:30.851 +++ CONFIG_IDXD=y 00:04:30.851 +++ CONFIG_AVAHI=n 00:04:30.851 +++ CONFIG_URING=n 00:04:30.851 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:30.851 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:30.851 ++++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:04:30.851 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:04:30.851 +++ _root=/home/vagrant/spdk_repo/spdk 00:04:30.851 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:04:30.851 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:04:30.851 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:04:30.851 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:04:30.851 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:04:30.851 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:04:30.851 +++ VHOST_APP=("$_app_dir/vhost") 00:04:30.851 +++ DD_APP=("$_app_dir/spdk_dd") 00:04:30.851 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:04:30.851 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:04:30.851 +++ [[ #ifndef SPDK_CONFIG_H 00:04:30.851 #define SPDK_CONFIG_H 00:04:30.851 #define SPDK_CONFIG_APPS 1 00:04:30.851 #define SPDK_CONFIG_ARCH native 00:04:30.851 #define SPDK_CONFIG_ASAN 1 00:04:30.851 #undef SPDK_CONFIG_AVAHI 00:04:30.851 #undef SPDK_CONFIG_CET 00:04:30.851 #define SPDK_CONFIG_COVERAGE 1 00:04:30.851 #define SPDK_CONFIG_CROSS_PREFIX 00:04:30.851 #undef SPDK_CONFIG_CRYPTO 00:04:30.851 #undef SPDK_CONFIG_CRYPTO_MLX5 00:04:30.851 #undef SPDK_CONFIG_CUSTOMOCF 00:04:30.851 #undef SPDK_CONFIG_DAOS 00:04:30.851 #define SPDK_CONFIG_DAOS_DIR 00:04:30.851 #define SPDK_CONFIG_DEBUG 1 00:04:30.851 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:04:30.851 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:30.851 #define SPDK_CONFIG_DPDK_INC_DIR 00:04:30.851 #define SPDK_CONFIG_DPDK_LIB_DIR 00:04:30.851 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:04:30.851 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:30.851 #define SPDK_CONFIG_EXAMPLES 1 00:04:30.851 #undef SPDK_CONFIG_FC 00:04:30.851 #define SPDK_CONFIG_FC_PATH 00:04:30.851 #define SPDK_CONFIG_FIO_PLUGIN 1 00:04:30.851 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:04:30.851 #undef SPDK_CONFIG_FUSE 00:04:30.851 #undef SPDK_CONFIG_FUZZER 00:04:30.851 #define SPDK_CONFIG_FUZZER_LIB 00:04:30.851 #undef SPDK_CONFIG_GOLANG 00:04:30.851 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:04:30.851 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:04:30.851 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:04:30.851 #undef SPDK_CONFIG_HAVE_LIBBSD 00:04:30.851 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:04:30.851 #define SPDK_CONFIG_IDXD 1 00:04:30.851 #undef SPDK_CONFIG_IDXD_KERNEL 00:04:30.851 #undef SPDK_CONFIG_IPSEC_MB 00:04:30.851 #define SPDK_CONFIG_IPSEC_MB_DIR 00:04:30.851 #define SPDK_CONFIG_ISAL 1 00:04:30.851 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:04:30.851 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:04:30.851 #define SPDK_CONFIG_LIBDIR 00:04:30.851 #undef SPDK_CONFIG_LTO 00:04:30.851 #define SPDK_CONFIG_MAX_LCORES 00:04:30.851 #define SPDK_CONFIG_NVME_CUSE 1 00:04:30.851 #undef SPDK_CONFIG_OCF 00:04:30.851 #define SPDK_CONFIG_OCF_PATH 00:04:30.851 #define SPDK_CONFIG_OPENSSL_PATH 00:04:30.851 #undef SPDK_CONFIG_PGO_CAPTURE 00:04:30.851 #undef SPDK_CONFIG_PGO_USE 00:04:30.851 #define SPDK_CONFIG_PREFIX /usr/local 00:04:30.851 #define SPDK_CONFIG_RAID5F 1 00:04:30.851 #undef SPDK_CONFIG_RBD 00:04:30.851 #define SPDK_CONFIG_RDMA 1 00:04:30.851 #define SPDK_CONFIG_RDMA_PROV verbs 00:04:30.851 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:04:30.851 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:04:30.851 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:04:30.851 #undef SPDK_CONFIG_SHARED 00:04:30.851 #undef SPDK_CONFIG_SMA 00:04:30.851 #define SPDK_CONFIG_TESTS 1 00:04:30.851 
#undef SPDK_CONFIG_TSAN 00:04:30.851 #undef SPDK_CONFIG_UBLK 00:04:30.851 #define SPDK_CONFIG_UBSAN 1 00:04:30.851 #define SPDK_CONFIG_UNIT_TESTS 1 00:04:30.851 #undef SPDK_CONFIG_URING 00:04:30.851 #define SPDK_CONFIG_URING_PATH 00:04:30.851 #undef SPDK_CONFIG_URING_ZNS 00:04:30.851 #undef SPDK_CONFIG_USDT 00:04:30.851 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:04:30.851 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:04:30.851 #undef SPDK_CONFIG_VFIO_USER 00:04:30.851 #define SPDK_CONFIG_VFIO_USER_DIR 00:04:30.851 #define SPDK_CONFIG_VHOST 1 00:04:30.851 #define SPDK_CONFIG_VIRTIO 1 00:04:30.851 #undef SPDK_CONFIG_VTUNE 00:04:30.851 #define SPDK_CONFIG_VTUNE_DIR 00:04:30.851 #define SPDK_CONFIG_WERROR 1 00:04:30.851 #define SPDK_CONFIG_WPDK_DIR 00:04:30.851 #undef SPDK_CONFIG_XNVME 00:04:30.851 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:04:30.851 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:04:30.851 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:30.851 +++ [[ -e /bin/wpdk_common.sh ]] 00:04:30.851 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.851 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.851 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:30.851 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:30.851 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:30.851 ++++ export PATH 00:04:30.851 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:30.851 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:30.851 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:30.851 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:30.851 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:30.851 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:04:30.851 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:04:30.851 +++ TEST_TAG=N/A 00:04:30.851 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:04:30.851 ++ : 1 00:04:30.851 ++ export RUN_NIGHTLY 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_RUN_VALGRIND 00:04:30.851 ++ : 1 00:04:30.851 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:04:30.851 ++ : 1 00:04:30.851 ++ export SPDK_TEST_UNITTEST 00:04:30.851 ++ : 00:04:30.851 ++ export SPDK_TEST_AUTOBUILD 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_RELEASE_BUILD 00:04:30.851 ++ : 0 
00:04:30.851 ++ export SPDK_TEST_ISAL 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_ISCSI 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_ISCSI_INITIATOR 00:04:30.851 ++ : 1 00:04:30.851 ++ export SPDK_TEST_NVME 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_NVME_PMR 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_NVME_BP 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_NVME_CLI 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_NVME_CUSE 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_NVME_FDP 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_NVMF 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_VFIOUSER 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_VFIOUSER_QEMU 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_FUZZER 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_FUZZER_SHORT 00:04:30.851 ++ : rdma 00:04:30.851 ++ export SPDK_TEST_NVMF_TRANSPORT 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_RBD 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_VHOST 00:04:30.851 ++ : 1 00:04:30.851 ++ export SPDK_TEST_BLOCKDEV 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_IOAT 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_BLOBFS 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_VHOST_INIT 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_LVOL 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_VBDEV_COMPRESS 00:04:30.851 ++ : 1 00:04:30.851 ++ export SPDK_RUN_ASAN 00:04:30.851 ++ : 1 00:04:30.851 ++ export SPDK_RUN_UBSAN 00:04:30.851 ++ : 00:04:30.851 ++ export SPDK_RUN_EXTERNAL_DPDK 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_RUN_NON_ROOT 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_CRYPTO 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_FTL 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_OCF 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_VMD 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_OPAL 00:04:30.851 ++ : 00:04:30.851 ++ export SPDK_TEST_NATIVE_DPDK 00:04:30.851 ++ : true 00:04:30.851 ++ export SPDK_AUTOTEST_X 00:04:30.851 ++ : 1 00:04:30.851 ++ export SPDK_TEST_RAID5 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_URING 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_USDT 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_USE_IGB_UIO 00:04:30.851 ++ : 0 00:04:30.851 ++ export SPDK_TEST_SCHEDULER 00:04:30.852 ++ : 0 00:04:30.852 ++ export SPDK_TEST_SCANBUILD 00:04:30.852 ++ : 00:04:30.852 ++ export SPDK_TEST_NVMF_NICS 00:04:30.852 ++ : 0 00:04:30.852 ++ export SPDK_TEST_SMA 00:04:30.852 ++ : 0 00:04:30.852 ++ export SPDK_TEST_DAOS 00:04:30.852 ++ : 0 00:04:30.852 ++ export SPDK_TEST_XNVME 00:04:30.852 ++ : 0 00:04:30.852 ++ export SPDK_TEST_ACCEL_DSA 00:04:30.852 ++ : 0 00:04:30.852 ++ export SPDK_TEST_ACCEL_IAA 00:04:30.852 ++ : 0 00:04:30.852 ++ export SPDK_TEST_ACCEL_IOAT 00:04:30.852 ++ : 00:04:30.852 ++ export SPDK_TEST_FUZZER_TARGET 00:04:30.852 ++ : 0 00:04:30.852 ++ export SPDK_TEST_NVMF_MDNS 00:04:30.852 ++ : 0 00:04:30.852 ++ export SPDK_JSONRPC_GO_CLIENT 00:04:30.852 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:30.852 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:30.852 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:30.852 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:30.852 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:30.852 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:30.852 ++ export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:30.852 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:30.852 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:04:30.852 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:04:30.852 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:30.852 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:30.852 ++ export PYTHONDONTWRITEBYTECODE=1 00:04:30.852 ++ PYTHONDONTWRITEBYTECODE=1 00:04:30.852 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:30.852 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:30.852 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:30.852 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:30.852 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:04:30.852 ++ rm -rf /var/tmp/asan_suppression_file 00:04:30.852 ++ cat 00:04:30.852 ++ echo leak:libfuse3.so 00:04:30.852 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:30.852 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:30.852 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:30.852 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:30.852 ++ '[' -z /var/spdk/dependencies ']' 00:04:30.852 ++ export DEPENDENCY_DIR 00:04:30.852 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:30.852 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:30.852 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:30.852 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:30.852 ++ export QEMU_BIN= 00:04:30.852 ++ QEMU_BIN= 00:04:30.852 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:30.852 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:30.852 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:30.852 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:30.852 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:30.852 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:30.852 ++ '[' 0 -eq 0 ']' 00:04:30.852 ++ export valgrind= 00:04:30.852 ++ valgrind= 00:04:30.852 +++ uname -s 00:04:30.852 ++ '[' Linux = Linux ']' 00:04:30.852 ++ HUGEMEM=4096 00:04:30.852 ++ export CLEAR_HUGE=yes 00:04:30.852 ++ CLEAR_HUGE=yes 00:04:30.852 ++ [[ 0 -eq 1 ]] 00:04:30.852 ++ [[ 0 -eq 1 ]] 00:04:30.852 ++ MAKE=make 00:04:30.852 +++ nproc 00:04:30.852 ++ MAKEFLAGS=-j10 00:04:30.852 ++ export HUGEMEM=4096 00:04:30.852 ++ HUGEMEM=4096 00:04:30.852 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:04:30.852 ++ NO_HUGE=() 00:04:30.852 ++ TEST_MODE= 00:04:30.852 ++ [[ -z '' ]] 00:04:30.852 ++ 
PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:30.852 ++ exec 00:04:30.852 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:30.852 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:04:30.852 ++ set_test_storage 2147483648 00:04:30.852 ++ [[ -v testdir ]] 00:04:30.852 ++ local requested_size=2147483648 00:04:30.852 ++ local mount target_dir 00:04:30.852 ++ local -A mounts fss sizes avails uses 00:04:30.852 ++ local source fs size avail mount use 00:04:30.852 ++ local storage_fallback storage_candidates 00:04:30.852 +++ mktemp -udt spdk.XXXXXX 00:04:30.852 ++ storage_fallback=/tmp/spdk.8BcMw8 00:04:30.852 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:04:30.852 ++ [[ -n '' ]] 00:04:30.852 ++ [[ -n '' ]] 00:04:30.852 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.8BcMw8/tests/unit /tmp/spdk.8BcMw8 00:04:30.852 ++ requested_size=2214592512 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 +++ df -T 00:04:30.852 +++ grep -v Filesystem 00:04:30.852 ++ mounts["$mount"]=udev 00:04:30.852 ++ fss["$mount"]=devtmpfs 00:04:30.852 ++ avails["$mount"]=6224461824 00:04:30.852 ++ sizes["$mount"]=6224461824 00:04:30.852 ++ uses["$mount"]=0 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ mounts["$mount"]=tmpfs 00:04:30.852 ++ fss["$mount"]=tmpfs 00:04:30.852 ++ avails["$mount"]=1253408768 00:04:30.852 ++ sizes["$mount"]=1254514688 00:04:30.852 ++ uses["$mount"]=1105920 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ mounts["$mount"]=/dev/vda1 00:04:30.852 ++ fss["$mount"]=ext4 00:04:30.852 ++ avails["$mount"]=10733649920 00:04:30.852 ++ sizes["$mount"]=20616794112 00:04:30.852 ++ uses["$mount"]=9866366976 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ mounts["$mount"]=tmpfs 00:04:30.852 ++ fss["$mount"]=tmpfs 00:04:30.852 ++ avails["$mount"]=6272561152 00:04:30.852 ++ sizes["$mount"]=6272561152 00:04:30.852 ++ uses["$mount"]=0 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ mounts["$mount"]=tmpfs 00:04:30.852 ++ fss["$mount"]=tmpfs 00:04:30.852 ++ avails["$mount"]=5242880 00:04:30.852 ++ sizes["$mount"]=5242880 00:04:30.852 ++ uses["$mount"]=0 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ mounts["$mount"]=tmpfs 00:04:30.852 ++ fss["$mount"]=tmpfs 00:04:30.852 ++ avails["$mount"]=6272561152 00:04:30.852 ++ sizes["$mount"]=6272561152 00:04:30.852 ++ uses["$mount"]=0 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ mounts["$mount"]=/dev/loop0 00:04:30.852 ++ fss["$mount"]=squashfs 00:04:30.852 ++ avails["$mount"]=0 00:04:30.852 ++ sizes["$mount"]=67108864 00:04:30.852 ++ uses["$mount"]=67108864 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ mounts["$mount"]=/dev/vda15 00:04:30.852 ++ fss["$mount"]=vfat 00:04:30.852 ++ avails["$mount"]=103089152 00:04:30.852 ++ sizes["$mount"]=109422592 00:04:30.852 ++ uses["$mount"]=6334464 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ mounts["$mount"]=/dev/loop2 00:04:30.852 ++ fss["$mount"]=squashfs 00:04:30.852 ++ avails["$mount"]=0 00:04:30.852 ++ sizes["$mount"]=41025536 00:04:30.852 ++ uses["$mount"]=41025536 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ mounts["$mount"]=/dev/loop1 00:04:30.852 ++ 
fss["$mount"]=squashfs 00:04:30.852 ++ avails["$mount"]=0 00:04:30.852 ++ sizes["$mount"]=96337920 00:04:30.852 ++ uses["$mount"]=96337920 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ mounts["$mount"]=tmpfs 00:04:30.852 ++ fss["$mount"]=tmpfs 00:04:30.852 ++ avails["$mount"]=1254510592 00:04:30.852 ++ sizes["$mount"]=1254510592 00:04:30.852 ++ uses["$mount"]=0 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output 00:04:30.852 ++ fss["$mount"]=fuse.sshfs 00:04:30.852 ++ avails["$mount"]=95789756416 00:04:30.852 ++ sizes["$mount"]=105088212992 00:04:30.852 ++ uses["$mount"]=3913023488 00:04:30.852 ++ read -r source fs size use avail _ mount 00:04:30.852 ++ printf '* Looking for test storage...\n' 00:04:30.852 * Looking for test storage... 00:04:30.852 ++ local target_space new_size 00:04:30.852 ++ for target_dir in "${storage_candidates[@]}" 00:04:30.852 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:04:30.852 +++ awk '$1 !~ /Filesystem/{print $6}' 00:04:30.852 ++ mount=/ 00:04:30.852 ++ target_space=10733649920 00:04:30.852 ++ (( target_space == 0 || target_space < requested_size )) 00:04:30.852 ++ (( target_space >= requested_size )) 00:04:30.852 ++ [[ ext4 == tmpfs ]] 00:04:30.852 ++ [[ ext4 == ramfs ]] 00:04:30.852 ++ [[ / == / ]] 00:04:30.852 ++ new_size=12080959488 00:04:30.852 ++ (( new_size * 100 / sizes[/] > 95 )) 00:04:30.852 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:30.852 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:30.852 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:04:30.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:04:30.852 ++ return 0 00:04:30.852 ++ set -o errtrace 00:04:30.852 ++ shopt -s extdebug 00:04:30.852 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:04:30.852 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:04:30.852 12:49:49 -- common/autotest_common.sh@1672 -- # true 00:04:30.852 12:49:49 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:04:30.852 12:49:49 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:04:30.852 12:49:49 -- common/autotest_common.sh@29 -- # exec 00:04:30.853 12:49:49 -- common/autotest_common.sh@31 -- # xtrace_restore 00:04:30.853 12:49:49 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:04:30.853 12:49:49 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:04:30.853 12:49:49 -- common/autotest_common.sh@18 -- # set -x 00:04:30.853 12:49:49 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:04:30.853 12:49:49 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:04:30.853 12:49:49 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:04:30.853 12:49:49 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:04:30.853 12:49:49 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:04:30.853 12:49:49 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:04:30.853 12:49:49 -- unit/unittest.sh@179 -- # hash lcov 00:04:30.853 12:49:49 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:30.853 12:49:49 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:30.853 12:49:49 -- unit/unittest.sh@180 -- # cov_avail=yes 00:04:30.853 12:49:49 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:04:30.853 12:49:49 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:04:30.853 12:49:49 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:30.853 12:49:49 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:30.853 12:49:49 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:04:30.853 --rc lcov_branch_coverage=1 00:04:30.853 --rc lcov_function_coverage=1 00:04:30.853 --rc genhtml_branch_coverage=1 00:04:30.853 --rc genhtml_function_coverage=1 00:04:30.853 --rc genhtml_legend=1 00:04:30.853 --rc geninfo_all_blocks=1 00:04:30.853 ' 00:04:30.853 12:49:49 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:04:30.853 --rc lcov_branch_coverage=1 00:04:30.853 --rc lcov_function_coverage=1 00:04:30.853 --rc genhtml_branch_coverage=1 00:04:30.853 --rc genhtml_function_coverage=1 00:04:30.853 --rc genhtml_legend=1 00:04:30.853 --rc geninfo_all_blocks=1 00:04:30.853 ' 00:04:30.853 12:49:49 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:04:30.853 --rc lcov_branch_coverage=1 00:04:30.853 --rc lcov_function_coverage=1 00:04:30.853 --rc genhtml_branch_coverage=1 00:04:30.853 --rc genhtml_function_coverage=1 00:04:30.853 --rc genhtml_legend=1 00:04:30.853 --rc geninfo_all_blocks=1 00:04:30.853 --no-external' 00:04:30.853 12:49:49 -- unit/unittest.sh@200 -- # LCOV='lcov 00:04:30.853 --rc lcov_branch_coverage=1 00:04:30.853 --rc lcov_function_coverage=1 00:04:30.853 --rc genhtml_branch_coverage=1 00:04:30.853 --rc genhtml_function_coverage=1 00:04:30.853 --rc genhtml_legend=1 00:04:30.853 --rc geninfo_all_blocks=1 00:04:30.853 --no-external' 00:04:30.853 12:49:49 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:32.757 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:32.757 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:32.757 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:32.758 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:32.758 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:32.758 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:33.017 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:33.017 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:33.017 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:19.684 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:19.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:19.684 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:19.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:19.684 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:19.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:19.684 12:50:38 -- unit/unittest.sh@206 -- # uname -m 00:05:19.684 12:50:38 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:05:19.684 12:50:38 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:19.684 12:50:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.684 12:50:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.684 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.684 ************************************ 00:05:19.684 START TEST unittest_pci_event 00:05:19.684 ************************************ 00:05:19.684 12:50:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:19.684 00:05:19.684 00:05:19.684 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.684 http://cunit.sourceforge.net/ 00:05:19.684 00:05:19.684 00:05:19.684 Suite: pci_event 00:05:19.684 Test: test_pci_parse_event ...[2024-06-11 12:50:38.160391] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:19.684 [2024-06-11 12:50:38.160856] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:19.684 passed 00:05:19.684 00:05:19.684 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.684 suites 1 1 n/a 0 0 00:05:19.684 tests 1 1 1 0 0 00:05:19.684 asserts 15 15 15 0 n/a 00:05:19.684 00:05:19.684 Elapsed time = 0.001 seconds 00:05:19.684 00:05:19.684 real 0m0.033s 00:05:19.684 user 0m0.006s 00:05:19.684 sys 0m0.024s 00:05:19.684 12:50:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.684 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.684 ************************************ 00:05:19.684 END TEST unittest_pci_event 00:05:19.684 ************************************ 00:05:19.684 12:50:38 -- 
unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:19.684 12:50:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.684 12:50:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.684 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.684 ************************************ 00:05:19.684 START TEST unittest_include 00:05:19.684 ************************************ 00:05:19.684 12:50:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:19.684 00:05:19.684 00:05:19.684 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.684 http://cunit.sourceforge.net/ 00:05:19.685 00:05:19.685 00:05:19.685 Suite: histogram 00:05:19.685 Test: histogram_test ...passed 00:05:19.685 Test: histogram_merge ...passed 00:05:19.685 00:05:19.685 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.685 suites 1 1 n/a 0 0 00:05:19.685 tests 2 2 2 0 0 00:05:19.685 asserts 50 50 50 0 n/a 00:05:19.685 00:05:19.685 Elapsed time = 0.006 seconds 00:05:19.685 00:05:19.685 real 0m0.033s 00:05:19.685 user 0m0.029s 00:05:19.685 sys 0m0.004s 00:05:19.685 12:50:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.685 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.685 ************************************ 00:05:19.685 END TEST unittest_include 00:05:19.685 ************************************ 00:05:19.685 12:50:38 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:19.685 12:50:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.685 12:50:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.685 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:19.685 ************************************ 00:05:19.685 START TEST unittest_bdev 00:05:19.685 ************************************ 00:05:19.685 12:50:38 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:05:19.685 12:50:38 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:19.685 00:05:19.685 00:05:19.685 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.685 http://cunit.sourceforge.net/ 00:05:19.685 00:05:19.685 00:05:19.685 Suite: bdev 00:05:19.685 Test: bytes_to_blocks_test ...passed 00:05:19.685 Test: num_blocks_test ...passed 00:05:19.685 Test: io_valid_test ...passed 00:05:19.685 Test: open_write_test ...[2024-06-11 12:50:38.380495] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:19.685 [2024-06-11 12:50:38.380788] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:19.685 [2024-06-11 12:50:38.380902] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:19.685 passed 00:05:19.685 Test: claim_test ...passed 00:05:19.685 Test: alias_add_del_test ...[2024-06-11 12:50:38.466986] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:19.685 [2024-06-11 12:50:38.467106] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:19.685 [2024-06-11 12:50:38.467166] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper 
alias 0 already exists 00:05:19.685 passed 00:05:19.685 Test: get_device_stat_test ...passed 00:05:19.943 Test: bdev_io_types_test ...passed 00:05:19.943 Test: bdev_io_wait_test ...passed 00:05:19.943 Test: bdev_io_spans_split_test ...passed 00:05:19.943 Test: bdev_io_boundary_split_test ...passed 00:05:19.943 Test: bdev_io_max_size_and_segment_split_test ...[2024-06-11 12:50:38.638348] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:19.943 passed 00:05:19.943 Test: bdev_io_mix_split_test ...passed 00:05:19.943 Test: bdev_io_split_with_io_wait ...passed 00:05:19.943 Test: bdev_io_write_unit_split_test ...[2024-06-11 12:50:38.763419] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:19.943 [2024-06-11 12:50:38.763526] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:19.943 [2024-06-11 12:50:38.763560] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:19.943 [2024-06-11 12:50:38.763608] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:20.202 passed 00:05:20.202 Test: bdev_io_alignment_with_boundary ...passed 00:05:20.202 Test: bdev_io_alignment ...passed 00:05:20.202 Test: bdev_histograms ...passed 00:05:20.202 Test: bdev_write_zeroes ...passed 00:05:20.202 Test: bdev_compare_and_write ...passed 00:05:20.460 Test: bdev_compare ...passed 00:05:20.460 Test: bdev_compare_emulated ...passed 00:05:20.460 Test: bdev_zcopy_write ...passed 00:05:20.460 Test: bdev_zcopy_read ...passed 00:05:20.460 Test: bdev_open_while_hotremove ...passed 00:05:20.460 Test: bdev_close_while_hotremove ...passed 00:05:20.460 Test: bdev_open_ext_test ...[2024-06-11 12:50:39.240306] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:20.460 passed 00:05:20.460 Test: bdev_open_ext_unregister ...passed 00:05:20.460 Test: bdev_set_io_timeout ...[2024-06-11 12:50:39.240532] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:20.460 passed 00:05:20.719 Test: bdev_set_qd_sampling ...passed 00:05:20.719 Test: lba_range_overlap ...passed 00:05:20.719 Test: lock_lba_range_check_ranges ...passed 00:05:20.719 Test: lock_lba_range_with_io_outstanding ...passed 00:05:20.719 Test: lock_lba_range_overlapped ...passed 00:05:20.719 Test: bdev_quiesce ...[2024-06-11 12:50:39.410484] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:05:20.719 passed 00:05:20.719 Test: bdev_io_abort ...passed 00:05:20.719 Test: bdev_unmap ...passed 00:05:20.719 Test: bdev_write_zeroes_split_test ...passed 00:05:20.719 Test: bdev_set_options_test ...passed 00:05:20.719 Test: bdev_get_memory_domains ...passed 00:05:20.719 Test: bdev_io_ext ...[2024-06-11 12:50:39.513084] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:20.719 passed 00:05:20.978 Test: bdev_io_ext_no_opts ...passed 00:05:20.978 Test: bdev_io_ext_invalid_opts ...passed 00:05:20.978 Test: bdev_io_ext_split ...passed 00:05:20.978 Test: bdev_io_ext_bounce_buffer ...passed 00:05:20.978 Test: bdev_register_uuid_alias ...[2024-06-11 12:50:39.677582] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 109dd732-1cc7-4235-b092-86685722dd0c already exists 00:05:20.978 [2024-06-11 12:50:39.677644] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:109dd732-1cc7-4235-b092-86685722dd0c alias for bdev bdev0 00:05:20.978 passed 00:05:20.978 Test: bdev_unregister_by_name ...[2024-06-11 12:50:39.695144] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:20.978 passed 00:05:20.978 Test: for_each_bdev_test ...[2024-06-11 12:50:39.695255] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:05:20.978 passed 00:05:20.978 Test: bdev_seek_test ...passed 00:05:20.978 Test: bdev_copy ...passed 00:05:20.978 Test: bdev_copy_split_test ...passed 00:05:20.978 Test: examine_locks ...passed 00:05:20.978 Test: claim_v2_rwo ...[2024-06-11 12:50:39.788461] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:20.978 passed 00:05:20.978 Test: claim_v2_rom ...[2024-06-11 12:50:39.788544] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.788563] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.788614] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.788631] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.788680] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:20.978 [2024-06-11 12:50:39.788827] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.788875] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.788897] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:05:20.978 [2024-06-11 12:50:39.788920] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:20.978 passed 00:05:20.978 Test: claim_v2_rwm ...[2024-06-11 12:50:39.788958] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:20.978 [2024-06-11 12:50:39.788990] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:20.978 [2024-06-11 12:50:39.789102] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:20.978 [2024-06-11 12:50:39.789160] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.789191] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.789213] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.789229] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.789252] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.789286] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:20.978 passed 00:05:20.978 Test: claim_v2_existing_writer ...passed 00:05:20.978 Test: claim_v2_existing_v1 ...[2024-06-11 12:50:39.789503] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:20.978 [2024-06-11 12:50:39.789537] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:20.978 [2024-06-11 12:50:39.789659] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.789688] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:20.978 passed 00:05:20.978 Test: claim_v1_existing_v2 ...[2024-06-11 12:50:39.789707] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.789826] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.789902] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:20.978 passed 
00:05:20.978 Test: examine_claimed ...[2024-06-11 12:50:39.789934] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:20.978 [2024-06-11 12:50:39.790233] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:20.978 passed 00:05:20.978 00:05:20.978 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.978 suites 1 1 n/a 0 0 00:05:20.978 tests 59 59 59 0 0 00:05:20.978 asserts 4599 4599 4599 0 n/a 00:05:20.978 00:05:20.978 Elapsed time = 1.473 seconds 00:05:21.239 12:50:39 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:21.239 00:05:21.239 00:05:21.239 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.239 http://cunit.sourceforge.net/ 00:05:21.239 00:05:21.239 00:05:21.239 Suite: nvme 00:05:21.239 Test: test_create_ctrlr ...passed 00:05:21.239 Test: test_reset_ctrlr ...[2024-06-11 12:50:39.837664] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.239 passed 00:05:21.239 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:21.239 Test: test_failover_ctrlr ...passed 00:05:21.239 Test: test_race_between_failover_and_add_secondary_trid ...[2024-06-11 12:50:39.840451] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.239 [2024-06-11 12:50:39.840683] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.239 [2024-06-11 12:50:39.840922] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.239 passed 00:05:21.239 Test: test_pending_reset ...[2024-06-11 12:50:39.842510] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.239 [2024-06-11 12:50:39.842830] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.239 passed 00:05:21.239 Test: test_attach_ctrlr ...[2024-06-11 12:50:39.843994] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:21.239 passed 00:05:21.239 Test: test_aer_cb ...passed 00:05:21.239 Test: test_submit_nvme_cmd ...passed 00:05:21.239 Test: test_add_remove_trid ...passed 00:05:21.239 Test: test_abort ...[2024-06-11 12:50:39.847510] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7221:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:21.239 passed 00:05:21.239 Test: test_get_io_qpair ...passed 00:05:21.239 Test: test_bdev_unregister ...passed 00:05:21.239 Test: test_compare_ns ...passed 00:05:21.239 Test: test_init_ana_log_page ...passed 00:05:21.239 Test: test_get_memory_domains ...passed 00:05:21.239 Test: test_reconnect_qpair ...[2024-06-11 12:50:39.850417] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:21.239 passed 00:05:21.239 Test: test_create_bdev_ctrlr ...[2024-06-11 12:50:39.851021] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5273:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:21.239 passed 00:05:21.239 Test: test_add_multi_ns_to_bdev ...[2024-06-11 12:50:39.852366] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4486:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:21.239 passed 00:05:21.239 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:21.240 Test: test_admin_path ...passed 00:05:21.240 Test: test_reset_bdev_ctrlr ...passed 00:05:21.240 Test: test_find_io_path ...passed 00:05:21.240 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:21.240 Test: test_retry_io_for_io_path_error ...passed 00:05:21.240 Test: test_retry_io_count ...passed 00:05:21.240 Test: test_concurrent_read_ana_log_page ...passed 00:05:21.240 Test: test_retry_io_for_ana_error ...passed 00:05:21.240 Test: test_check_io_error_resiliency_params ...[2024-06-11 12:50:39.860140] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5926:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:05:21.240 [2024-06-11 12:50:39.860230] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5930:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:21.240 [2024-06-11 12:50:39.860258] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5939:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:21.240 [2024-06-11 12:50:39.860286] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5942:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:21.240 [2024-06-11 12:50:39.860320] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:21.240 [2024-06-11 12:50:39.860349] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:21.240 [2024-06-11 12:50:39.860369] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5934:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:21.240 [2024-06-11 12:50:39.860423] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5949:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:21.240 [2024-06-11 12:50:39.860458] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5946:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:21.240 passed 00:05:21.240 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:05:21.240 Test: test_reconnect_ctrlr ...[2024-06-11 12:50:39.861409] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 [2024-06-11 12:50:39.861621] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:21.240 [2024-06-11 12:50:39.861958] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 [2024-06-11 12:50:39.862114] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 [2024-06-11 12:50:39.862271] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 passed 00:05:21.240 Test: test_retry_failover_ctrlr ...[2024-06-11 12:50:39.862695] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 passed 00:05:21.240 Test: test_fail_path ...[2024-06-11 12:50:39.863301] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 [2024-06-11 12:50:39.863507] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 [2024-06-11 12:50:39.863640] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 [2024-06-11 12:50:39.863790] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 [2024-06-11 12:50:39.863954] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 passed 00:05:21.240 Test: test_nvme_ns_cmp ...passed 00:05:21.240 Test: test_ana_transition ...passed 00:05:21.240 Test: test_set_preferred_path ...passed 00:05:21.240 Test: test_find_next_io_path ...passed 00:05:21.240 Test: test_find_io_path_min_qd ...passed 00:05:21.240 Test: test_disable_auto_failback ...[2024-06-11 12:50:39.865823] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 passed 00:05:21.240 Test: test_set_multipath_policy ...passed 00:05:21.240 Test: test_uuid_generation ...passed 00:05:21.240 Test: test_retry_io_to_same_path ...passed 00:05:21.240 Test: test_race_between_reset_and_disconnected ...passed 00:05:21.240 Test: test_ctrlr_op_rpc ...passed 00:05:21.240 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:21.240 Test: test_disable_enable_ctrlr ...[2024-06-11 12:50:39.869855] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:21.240 [2024-06-11 12:50:39.870051] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:21.240 passed 00:05:21.240 Test: test_delete_ctrlr_done ...passed 00:05:21.240 Test: test_ns_remove_during_reset ...passed 00:05:21.240 00:05:21.240 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.240 suites 1 1 n/a 0 0 00:05:21.240 tests 48 48 48 0 0 00:05:21.240 asserts 3553 3553 3553 0 n/a 00:05:21.240 00:05:21.240 Elapsed time = 0.035 seconds 00:05:21.240 12:50:39 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:21.240 Test Options 00:05:21.240 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:21.240 00:05:21.240 00:05:21.240 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.240 http://cunit.sourceforge.net/ 00:05:21.240 00:05:21.240 00:05:21.240 Suite: raid 00:05:21.240 Test: test_create_raid ...passed 00:05:21.240 Test: test_create_raid_superblock ...passed 00:05:21.240 Test: test_delete_raid ...passed 00:05:21.240 Test: test_create_raid_invalid_args ...[2024-06-11 12:50:39.913487] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:21.240 [2024-06-11 12:50:39.914005] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:21.240 [2024-06-11 12:50:39.914529] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:21.240 [2024-06-11 12:50:39.914870] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:21.240 [2024-06-11 12:50:39.915773] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:21.240 passed 00:05:21.240 Test: test_delete_raid_invalid_args ...passed 00:05:21.240 Test: test_io_channel ...passed 00:05:21.240 Test: test_reset_io ...passed 00:05:21.240 Test: test_write_io ...passed 00:05:21.240 Test: test_read_io ...passed 00:05:22.182 Test: test_unmap_io ...passed 00:05:22.182 Test: test_io_failure ...[2024-06-11 12:50:40.807027] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:22.182 passed 00:05:22.182 Test: test_multi_raid_no_io ...passed 00:05:22.182 Test: test_multi_raid_with_io ...passed 00:05:22.182 Test: test_io_type_supported ...passed 00:05:22.182 Test: test_raid_json_dump_info ...passed 00:05:22.182 Test: test_context_size ...passed 00:05:22.182 Test: test_raid_level_conversions ...passed 00:05:22.182 Test: test_raid_process ...passed 00:05:22.182 Test: test_raid_io_split ...passed 00:05:22.182 00:05:22.182 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.182 suites 1 1 n/a 0 0 00:05:22.182 tests 19 19 19 0 0 00:05:22.182 asserts 177879 177879 177879 0 n/a 00:05:22.182 00:05:22.182 Elapsed time = 0.908 seconds 00:05:22.182 12:50:40 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:22.182 00:05:22.182 00:05:22.182 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.182 http://cunit.sourceforge.net/ 00:05:22.182 00:05:22.182 00:05:22.182 Suite: raid_sb 00:05:22.183 Test: test_raid_bdev_write_superblock ...passed 00:05:22.183 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:22.183 Test: 
test_raid_bdev_parse_superblock ...[2024-06-11 12:50:40.859845] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:22.183 passed 00:05:22.183 00:05:22.183 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.183 suites 1 1 n/a 0 0 00:05:22.183 tests 3 3 3 0 0 00:05:22.183 asserts 32 32 32 0 n/a 00:05:22.183 00:05:22.183 Elapsed time = 0.001 seconds 00:05:22.183 12:50:40 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:22.183 00:05:22.183 00:05:22.183 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.183 http://cunit.sourceforge.net/ 00:05:22.183 00:05:22.183 00:05:22.183 Suite: concat 00:05:22.183 Test: test_concat_start ...passed 00:05:22.183 Test: test_concat_rw ...passed 00:05:22.183 Test: test_concat_null_payload ...passed 00:05:22.183 00:05:22.183 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.183 suites 1 1 n/a 0 0 00:05:22.183 tests 3 3 3 0 0 00:05:22.183 asserts 8097 8097 8097 0 n/a 00:05:22.183 00:05:22.183 Elapsed time = 0.007 seconds 00:05:22.183 12:50:40 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:22.183 00:05:22.183 00:05:22.183 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.183 http://cunit.sourceforge.net/ 00:05:22.183 00:05:22.183 00:05:22.183 Suite: raid1 00:05:22.183 Test: test_raid1_start ...passed 00:05:22.183 Test: test_raid1_read_balancing ...passed 00:05:22.183 00:05:22.183 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.183 suites 1 1 n/a 0 0 00:05:22.183 tests 2 2 2 0 0 00:05:22.183 asserts 2856 2856 2856 0 n/a 00:05:22.183 00:05:22.183 Elapsed time = 0.004 seconds 00:05:22.183 12:50:40 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:22.183 00:05:22.183 00:05:22.183 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.183 http://cunit.sourceforge.net/ 00:05:22.183 00:05:22.183 00:05:22.183 Suite: zone 00:05:22.183 Test: test_zone_get_operation ...passed 00:05:22.183 Test: test_bdev_zone_get_info ...passed 00:05:22.183 Test: test_bdev_zone_management ...passed 00:05:22.183 Test: test_bdev_zone_append ...passed 00:05:22.183 Test: test_bdev_zone_append_with_md ...passed 00:05:22.183 Test: test_bdev_zone_appendv ...passed 00:05:22.183 Test: test_bdev_zone_appendv_with_md ...passed 00:05:22.183 Test: test_bdev_io_get_append_location ...passed 00:05:22.183 00:05:22.183 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.183 suites 1 1 n/a 0 0 00:05:22.183 tests 8 8 8 0 0 00:05:22.183 asserts 94 94 94 0 n/a 00:05:22.183 00:05:22.183 Elapsed time = 0.000 seconds 00:05:22.183 12:50:40 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:22.183 00:05:22.183 00:05:22.183 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.183 http://cunit.sourceforge.net/ 00:05:22.183 00:05:22.183 00:05:22.183 Suite: gpt_parse 00:05:22.183 Test: test_parse_mbr_and_primary ...[2024-06-11 12:50:40.987468] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:22.183 [2024-06-11 12:50:40.987784] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:22.183 [2024-06-11 12:50:40.987824] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:22.183 [2024-06-11 12:50:40.987874] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:22.183 [2024-06-11 12:50:40.987908] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:22.183 [2024-06-11 12:50:40.987959] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:22.183 passed 00:05:22.183 Test: test_parse_secondary ...[2024-06-11 12:50:40.988620] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:22.183 [2024-06-11 12:50:40.988659] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:22.183 [2024-06-11 12:50:40.988683] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:22.183 [2024-06-11 12:50:40.988704] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:22.183 passed 00:05:22.183 Test: test_check_mbr ...[2024-06-11 12:50:40.989338] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:22.183 [2024-06-11 12:50:40.989406] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:22.183 passed 00:05:22.183 Test: test_read_header ...[2024-06-11 12:50:40.989483] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:22.183 [2024-06-11 12:50:40.989553] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:22.183 [2024-06-11 12:50:40.989608] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:22.183 [2024-06-11 12:50:40.989651] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:22.183 [2024-06-11 12:50:40.989680] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:22.183 [2024-06-11 12:50:40.989701] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:22.183 passed 00:05:22.183 Test: test_read_partitions ...[2024-06-11 12:50:40.989750] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:22.183 [2024-06-11 12:50:40.989784] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:22.183 [2024-06-11 12:50:40.989807] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:22.184 [2024-06-11 12:50:40.989837] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:22.184 [2024-06-11 12:50:40.990131] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:05:22.184 passed 00:05:22.184 00:05:22.184 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.184 suites 1 1 n/a 0 0 00:05:22.184 tests 5 5 5 0 0 00:05:22.184 asserts 33 33 33 0 n/a 00:05:22.184 00:05:22.184 Elapsed time = 0.003 seconds 00:05:22.184 12:50:41 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:22.443 00:05:22.443 00:05:22.443 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.443 http://cunit.sourceforge.net/ 00:05:22.443 00:05:22.443 00:05:22.443 Suite: bdev_part 00:05:22.443 Test: part_test ...[2024-06-11 12:50:41.026944] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:22.443 passed 00:05:22.443 Test: part_free_test ...passed 00:05:22.443 Test: part_get_io_channel_test ...passed 00:05:22.443 Test: part_construct_ext ...passed 00:05:22.443 00:05:22.443 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.444 suites 1 1 n/a 0 0 00:05:22.444 tests 4 4 4 0 0 00:05:22.444 asserts 48 48 48 0 n/a 00:05:22.444 00:05:22.444 Elapsed time = 0.050 seconds 00:05:22.444 12:50:41 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:22.444 00:05:22.444 00:05:22.444 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.444 http://cunit.sourceforge.net/ 00:05:22.444 00:05:22.444 00:05:22.444 Suite: scsi_nvme_suite 00:05:22.444 Test: scsi_nvme_translate_test ...passed 00:05:22.444 00:05:22.444 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.444 suites 1 1 n/a 0 0 00:05:22.444 tests 1 1 1 0 0 00:05:22.444 asserts 104 104 104 0 n/a 00:05:22.444 00:05:22.444 Elapsed time = 0.000 seconds 00:05:22.444 12:50:41 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:22.444 00:05:22.444 00:05:22.444 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.444 http://cunit.sourceforge.net/ 00:05:22.444 00:05:22.444 00:05:22.444 Suite: lvol 00:05:22.444 Test: ut_lvs_init ...[2024-06-11 12:50:41.141762] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:22.444 [2024-06-11 12:50:41.142379] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:22.444 passed 00:05:22.444 Test: ut_lvol_init ...passed 00:05:22.444 Test: ut_lvol_snapshot ...passed 00:05:22.444 Test: ut_lvol_clone ...passed 00:05:22.444 Test: ut_lvs_destroy ...passed 00:05:22.444 Test: ut_lvs_unload ...passed 00:05:22.444 Test: ut_lvol_resize ...[2024-06-11 12:50:41.144833] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:22.444 passed 00:05:22.444 Test: ut_lvol_set_read_only ...passed 00:05:22.444 Test: ut_lvol_hotremove ...passed 00:05:22.444 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:22.444 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:22.444 Test: ut_lvol_read_write ...passed 00:05:22.444 Test: ut_vbdev_lvol_submit_request ...passed 00:05:22.444 Test: ut_lvol_examine_config ...passed 00:05:22.444 Test: ut_lvol_examine_disk ...[2024-06-11 12:50:41.146502] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:22.444 passed 00:05:22.444 Test: ut_lvol_rename ...[2024-06-11 12:50:41.147861] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:22.444 [2024-06-11 12:50:41.147966] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:22.444 passed 00:05:22.444 Test: ut_bdev_finish ...passed 00:05:22.444 Test: ut_lvs_rename ...passed 00:05:22.444 Test: ut_lvol_seek ...passed 00:05:22.444 Test: ut_esnap_dev_create ...[2024-06-11 12:50:41.149572] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:22.444 [2024-06-11 12:50:41.149655] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:22.444 [2024-06-11 12:50:41.149891] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:22.444 [2024-06-11 12:50:41.149944] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:22.444 passed 00:05:22.444 Test: ut_lvol_esnap_clone_bad_args ...[2024-06-11 12:50:41.150533] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:22.444 [2024-06-11 12:50:41.150779] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:22.444 passed 00:05:22.444 00:05:22.444 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.444 suites 1 1 n/a 0 0 00:05:22.444 tests 21 21 21 0 0 00:05:22.444 asserts 712 712 712 0 n/a 00:05:22.444 00:05:22.444 Elapsed time = 0.010 seconds 00:05:22.444 12:50:41 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:22.444 00:05:22.444 00:05:22.444 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.444 http://cunit.sourceforge.net/ 00:05:22.444 00:05:22.444 00:05:22.444 Suite: zone_block 00:05:22.444 Test: test_zone_block_create ...passed 00:05:22.444 Test: test_zone_block_create_invalid ...[2024-06-11 12:50:41.206974] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:22.444 [2024-06-11 12:50:41.207331] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-11 12:50:41.207528] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:22.444 [2024-06-11 12:50:41.207595] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-11 12:50:41.207747] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:22.444 [2024-06-11 12:50:41.207790] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-06-11 12:50:41.207871] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:22.444 [2024-06-11 12:50:41.207918] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:05:22.444 Test: test_get_zone_info ...[2024-06-11 12:50:41.208452] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 [2024-06-11 12:50:41.208528] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 [2024-06-11 12:50:41.208581] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 passed 00:05:22.444 Test: test_supported_io_types ...passed 00:05:22.444 Test: test_reset_zone ...[2024-06-11 12:50:41.209473] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 [2024-06-11 12:50:41.209546] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 passed 00:05:22.444 Test: test_open_zone ...[2024-06-11 12:50:41.210000] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 [2024-06-11 12:50:41.210716] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 [2024-06-11 12:50:41.210791] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 passed 00:05:22.444 Test: test_zone_write ...[2024-06-11 12:50:41.211282] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:22.444 [2024-06-11 12:50:41.211342] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 [2024-06-11 12:50:41.211396] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:22.444 [2024-06-11 12:50:41.211439] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 [2024-06-11 12:50:41.216959] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:22.444 [2024-06-11 12:50:41.217016] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:22.444 [2024-06-11 12:50:41.217096] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:22.444 [2024-06-11 12:50:41.217119] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 [2024-06-11 12:50:41.223021] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:22.444 [2024-06-11 12:50:41.223093] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.444 passed 00:05:22.445 Test: test_zone_read ...[2024-06-11 12:50:41.223601] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:22.445 [2024-06-11 12:50:41.223647] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.445 [2024-06-11 12:50:41.223727] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:22.445 [2024-06-11 12:50:41.223773] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.445 [2024-06-11 12:50:41.224265] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:22.445 [2024-06-11 12:50:41.224309] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.445 passed 00:05:22.445 Test: test_close_zone ...[2024-06-11 12:50:41.224746] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.445 [2024-06-11 12:50:41.224857] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.445 [2024-06-11 12:50:41.225119] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.445 [2024-06-11 12:50:41.225177] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.445 passed 00:05:22.445 Test: test_finish_zone ...[2024-06-11 12:50:41.225873] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.445 [2024-06-11 12:50:41.225939] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:22.445 passed 00:05:22.445 Test: test_append_zone ...[2024-06-11 12:50:41.226341] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:22.445 [2024-06-11 12:50:41.226399] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.445 [2024-06-11 12:50:41.226471] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:22.445 [2024-06-11 12:50:41.226496] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.445 [2024-06-11 12:50:41.237700] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:22.445 [2024-06-11 12:50:41.237754] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:22.445 passed 00:05:22.445 00:05:22.445 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.445 suites 1 1 n/a 0 0 00:05:22.445 tests 11 11 11 0 0 00:05:22.445 asserts 3437 3437 3437 0 n/a 00:05:22.445 00:05:22.445 Elapsed time = 0.032 seconds 00:05:22.445 12:50:41 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:22.703 00:05:22.703 00:05:22.703 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.703 http://cunit.sourceforge.net/ 00:05:22.703 00:05:22.703 00:05:22.703 Suite: bdev 00:05:22.703 Test: basic ...[2024-06-11 12:50:41.318753] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55da0a902401): Operation not permitted (rc=-1) 00:05:22.703 [2024-06-11 12:50:41.319140] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55da0a9023c0): Operation not permitted (rc=-1) 00:05:22.703 [2024-06-11 12:50:41.319279] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55da0a902401): Operation not permitted (rc=-1) 00:05:22.703 passed 00:05:22.703 Test: unregister_and_close ...passed 00:05:22.703 Test: unregister_and_close_different_threads ...passed 00:05:22.703 Test: basic_qos ...passed 00:05:22.703 Test: put_channel_during_reset ...passed 00:05:22.703 Test: aborted_reset ...passed 00:05:22.962 Test: aborted_reset_no_outstanding_io ...passed 00:05:22.962 Test: io_during_reset ...passed 00:05:22.962 Test: reset_completions ...passed 00:05:22.962 Test: io_during_qos_queue ...passed 00:05:22.962 Test: io_during_qos_reset ...passed 00:05:22.962 Test: enomem ...passed 00:05:23.220 Test: enomem_multi_bdev ...passed 00:05:23.220 Test: enomem_multi_bdev_unregister ...passed 00:05:23.220 Test: enomem_multi_io_target ...passed 00:05:23.220 Test: qos_dynamic_enable ...passed 00:05:23.220 Test: bdev_histograms_mt ...passed 00:05:23.220 Test: bdev_set_io_timeout_mt ...[2024-06-11 12:50:42.014107] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:23.220 passed 00:05:23.220 Test: lock_lba_range_then_submit_io ...[2024-06-11 12:50:42.035078] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55da0a902380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:05:23.486 
passed 00:05:23.486 Test: unregister_during_reset ...passed 00:05:23.486 Test: event_notify_and_close ...passed 00:05:23.486 Test: unregister_and_qos_poller ...passed 00:05:23.486 Suite: bdev_wrong_thread 00:05:23.486 Test: spdk_bdev_register_wt ...[2024-06-11 12:50:42.197660] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:05:23.486 passed 00:05:23.486 Test: spdk_bdev_examine_wt ...[2024-06-11 12:50:42.198366] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:05:23.486 passed 00:05:23.486 00:05:23.486 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.486 suites 2 2 n/a 0 0 00:05:23.487 tests 24 24 24 0 0 00:05:23.487 asserts 621 621 621 0 n/a 00:05:23.487 00:05:23.487 Elapsed time = 0.895 seconds 00:05:23.487 00:05:23.487 real 0m3.926s 00:05:23.487 user 0m1.882s 00:05:23.487 sys 0m2.040s 00:05:23.487 12:50:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.487 12:50:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.487 ************************************ 00:05:23.487 END TEST unittest_bdev 00:05:23.487 ************************************ 00:05:23.487 12:50:42 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:23.487 12:50:42 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:23.487 12:50:42 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:23.487 12:50:42 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:23.487 12:50:42 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:23.487 12:50:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.487 12:50:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.487 12:50:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.487 ************************************ 00:05:23.487 START TEST unittest_bdev_raid5f 00:05:23.487 ************************************ 00:05:23.487 12:50:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:23.487 00:05:23.487 00:05:23.487 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.487 http://cunit.sourceforge.net/ 00:05:23.487 00:05:23.487 00:05:23.487 Suite: raid5f 00:05:23.487 Test: test_raid5f_start ...passed 00:05:24.058 Test: test_raid5f_submit_read_request ...passed 00:05:24.316 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:05:28.498 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:05:46.581 Test: test_raid5f_chunk_write_error ...passed 00:05:53.187 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:05:56.476 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:06:23.042 Test: test_raid5f_submit_read_request_degraded ...passed 00:06:23.042 00:06:23.042 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.042 suites 1 1 n/a 0 0 00:06:23.042 tests 8 8 8 0 0 00:06:23.042 asserts 351864 351864 351864 0 n/a 00:06:23.042 00:06:23.042 Elapsed time = 58.953 seconds 00:06:23.042 00:06:23.042 real 0m59.053s 00:06:23.042 user 
0m56.267s 00:06:23.042 sys 0m2.758s 00:06:23.042 12:51:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.042 ************************************ 00:06:23.042 END TEST unittest_bdev_raid5f 00:06:23.042 ************************************ 00:06:23.042 12:51:41 -- common/autotest_common.sh@10 -- # set +x 00:06:23.042 12:51:41 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:06:23.042 12:51:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.042 12:51:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.042 12:51:41 -- common/autotest_common.sh@10 -- # set +x 00:06:23.042 ************************************ 00:06:23.042 START TEST unittest_blob_blobfs 00:06:23.042 ************************************ 00:06:23.042 12:51:41 -- common/autotest_common.sh@1104 -- # unittest_blob 00:06:23.042 12:51:41 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:06:23.042 12:51:41 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:06:23.042 00:06:23.042 00:06:23.042 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.042 http://cunit.sourceforge.net/ 00:06:23.042 00:06:23.042 00:06:23.042 Suite: blob_nocopy_noextent 00:06:23.042 Test: blob_init ...[2024-06-11 12:51:41.416910] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:23.042 passed 00:06:23.042 Test: blob_thin_provision ...passed 00:06:23.042 Test: blob_read_only ...passed 00:06:23.042 Test: bs_load ...[2024-06-11 12:51:41.517774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:23.042 passed 00:06:23.042 Test: bs_load_custom_cluster_size ...passed 00:06:23.042 Test: bs_load_after_failed_grow ...passed 00:06:23.042 Test: bs_cluster_sz ...[2024-06-11 12:51:41.554667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:23.042 [2024-06-11 12:51:41.555535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:06:23.042 [2024-06-11 12:51:41.555754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:23.042 passed 00:06:23.042 Test: bs_resize_md ...passed 00:06:23.042 Test: bs_destroy ...passed 00:06:23.042 Test: bs_type ...passed 00:06:23.042 Test: bs_super_block ...passed 00:06:23.042 Test: bs_test_recover_cluster_count ...passed 00:06:23.042 Test: bs_grow_live ...passed 00:06:23.042 Test: bs_grow_live_no_space ...passed 00:06:23.042 Test: bs_test_grow ...passed 00:06:23.042 Test: blob_serialize_test ...passed 00:06:23.042 Test: super_block_crc ...passed 00:06:23.042 Test: blob_thin_prov_write_count_io ...passed 00:06:23.042 Test: bs_load_iter_test ...passed 00:06:23.042 Test: blob_relations ...[2024-06-11 12:51:41.738930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:23.042 [2024-06-11 12:51:41.739062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:23.042 [2024-06-11 12:51:41.740154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:23.042 [2024-06-11 12:51:41.740246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:23.042 passed 00:06:23.042 Test: blob_relations2 ...[2024-06-11 12:51:41.755293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:23.042 [2024-06-11 12:51:41.755364] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:23.042 [2024-06-11 12:51:41.755422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:23.042 [2024-06-11 12:51:41.755442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:23.042 [2024-06-11 12:51:41.757071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:23.042 [2024-06-11 12:51:41.757159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:23.042 [2024-06-11 12:51:41.757782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:23.042 [2024-06-11 12:51:41.757868] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:23.042 passed 00:06:23.042 Test: blob_relations3 ...passed 00:06:23.302 Test: blobstore_clean_power_failure ...passed 00:06:23.302 Test: blob_delete_snapshot_power_failure ...[2024-06-11 12:51:41.914964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:23.302 [2024-06-11 12:51:41.927468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:23.302 [2024-06-11 12:51:41.927560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:23.302 [2024-06-11 12:51:41.927625] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:23.302 [2024-06-11 12:51:41.940291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:23.302 [2024-06-11 12:51:41.940373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:23.302 [2024-06-11 12:51:41.940434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:23.302 [2024-06-11 12:51:41.940465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:23.302 [2024-06-11 12:51:41.959824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:23.302 [2024-06-11 12:51:41.960008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:23.302 [2024-06-11 12:51:41.976490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:23.302 [2024-06-11 12:51:41.976638] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:23.302 [2024-06-11 12:51:41.992043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:23.302 [2024-06-11 12:51:41.992163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:23.302 passed 00:06:23.302 Test: blob_create_snapshot_power_failure ...[2024-06-11 12:51:42.032061] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:23.302 [2024-06-11 12:51:42.058138] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:23.302 [2024-06-11 12:51:42.071363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:23.302 passed 00:06:23.302 Test: blob_io_unit ...passed 00:06:23.560 Test: blob_io_unit_compatibility ...passed 00:06:23.560 Test: blob_ext_md_pages ...passed 00:06:23.560 Test: blob_esnap_io_4096_4096 ...passed 00:06:23.561 Test: blob_esnap_io_512_512 ...passed 00:06:23.561 Test: blob_esnap_io_4096_512 ...passed 00:06:23.561 Test: blob_esnap_io_512_4096 ...passed 00:06:23.561 Suite: blob_bs_nocopy_noextent 00:06:23.561 Test: blob_open ...passed 00:06:23.561 Test: blob_create ...[2024-06-11 12:51:42.329826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:23.561 passed 00:06:23.819 Test: blob_create_loop ...passed 00:06:23.819 Test: blob_create_fail ...[2024-06-11 12:51:42.429260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:23.819 passed 00:06:23.819 Test: blob_create_internal ...passed 00:06:23.819 Test: blob_create_zero_extent ...passed 00:06:23.819 Test: blob_snapshot ...passed 00:06:23.819 Test: blob_clone ...passed 00:06:23.819 Test: blob_inflate ...[2024-06-11 12:51:42.639339] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:23.819 passed 00:06:24.077 Test: blob_delete ...passed 00:06:24.077 Test: blob_resize_test ...[2024-06-11 12:51:42.716521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:24.077 passed 00:06:24.077 Test: channel_ops ...passed 00:06:24.077 Test: blob_super ...passed 00:06:24.077 Test: blob_rw_verify_iov ...passed 00:06:24.077 Test: blob_unmap ...passed 00:06:24.336 Test: blob_iter ...passed 00:06:24.336 Test: blob_parse_md ...passed 00:06:24.336 Test: bs_load_pending_removal ...passed 00:06:24.336 Test: bs_unload ...[2024-06-11 12:51:43.049808] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:24.336 passed 00:06:24.336 Test: bs_usable_clusters ...passed 00:06:24.336 Test: blob_crc ...[2024-06-11 12:51:43.127458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:24.336 [2024-06-11 12:51:43.127814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:24.336 passed 00:06:24.595 Test: blob_flags ...passed 00:06:24.595 Test: bs_version ...passed 00:06:24.595 Test: blob_set_xattrs_test ...[2024-06-11 12:51:43.244793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:24.595 [2024-06-11 12:51:43.245126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:24.595 passed 00:06:24.595 Test: blob_thin_prov_alloc ...passed 00:06:24.595 Test: blob_insert_cluster_msg_test ...passed 00:06:24.854 Test: blob_thin_prov_rw ...passed 00:06:24.854 Test: blob_thin_prov_rle ...passed 00:06:24.854 Test: blob_thin_prov_rw_iov ...passed 00:06:24.854 Test: blob_snapshot_rw ...passed 00:06:24.854 Test: blob_snapshot_rw_iov ...passed 00:06:25.112 Test: blob_inflate_rw ...passed 00:06:25.112 Test: blob_snapshot_freeze_io ...passed 00:06:25.112 Test: blob_operation_split_rw ...passed 00:06:25.371 Test: blob_operation_split_rw_iov ...passed 00:06:25.371 Test: blob_simultaneous_operations ...[2024-06-11 12:51:44.105090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:25.371 [2024-06-11 12:51:44.105446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:25.371 [2024-06-11 12:51:44.106620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:25.371 [2024-06-11 12:51:44.106824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:25.371 [2024-06-11 12:51:44.116713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:25.371 [2024-06-11 12:51:44.116887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:25.371 [2024-06-11 12:51:44.117040] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:25.371 [2024-06-11 12:51:44.117234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:25.371 passed 00:06:25.371 Test: blob_persist_test ...passed 00:06:25.630 Test: blob_decouple_snapshot ...passed 00:06:25.630 Test: blob_seek_io_unit ...passed 00:06:25.630 Test: blob_nested_freezes ...passed 00:06:25.630 Suite: blob_blob_nocopy_noextent 00:06:25.630 Test: blob_write ...passed 00:06:25.630 Test: blob_read ...passed 00:06:25.630 Test: blob_rw_verify ...passed 00:06:25.630 Test: blob_rw_verify_iov_nomem ...passed 00:06:25.630 Test: blob_rw_iov_read_only ...passed 00:06:25.887 Test: blob_xattr ...passed 00:06:25.887 Test: blob_dirty_shutdown ...passed 00:06:25.887 Test: blob_is_degraded ...passed 00:06:25.887 Suite: blob_esnap_bs_nocopy_noextent 00:06:25.887 Test: blob_esnap_create ...passed 00:06:25.887 Test: blob_esnap_thread_add_remove ...passed 00:06:25.887 Test: blob_esnap_clone_snapshot ...passed 00:06:25.887 Test: blob_esnap_clone_inflate ...passed 00:06:26.145 Test: blob_esnap_clone_decouple ...passed 00:06:26.145 Test: blob_esnap_clone_reload ...passed 00:06:26.145 Test: blob_esnap_hotplug ...passed 00:06:26.145 Suite: blob_nocopy_extent 00:06:26.145 Test: blob_init ...[2024-06-11 12:51:44.792592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:26.145 passed 00:06:26.145 Test: blob_thin_provision ...passed 00:06:26.145 Test: blob_read_only ...passed 00:06:26.145 Test: bs_load ...[2024-06-11 12:51:44.844253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:26.145 passed 00:06:26.145 Test: bs_load_custom_cluster_size ...passed 00:06:26.145 Test: bs_load_after_failed_grow ...passed 00:06:26.145 Test: bs_cluster_sz ...[2024-06-11 12:51:44.868890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:26.145 [2024-06-11 12:51:44.869248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
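The bs_cluster_sz records immediately above and below this point come from negative tests that deliberately hand the blobstore a zero or undersized cluster size. The following is a minimal, self-contained sketch of that kind of option check; the function and constant names are illustrative only and are not the SPDK implementation.

#include <stdio.h>

#define BS_PAGE_SIZE 4096u  /* assumed page size, matching the 4096 reported in the log */

/* Illustrative stand-in for the checks behind "Blobstore options cannot be
 * set to 0" and "Cluster size 4095 is smaller than page size 4096". */
static int check_cluster_sz(unsigned int cluster_sz)
{
    if (cluster_sz == 0) {
        fprintf(stderr, "Blobstore options cannot be set to 0\n");
        return -1;
    }
    if (cluster_sz < BS_PAGE_SIZE) {
        fprintf(stderr, "Cluster size %u is smaller than page size %u\n",
                cluster_sz, BS_PAGE_SIZE);
        return -1;
    }
    return 0;
}

int main(void)
{
    check_cluster_sz(0);     /* first rejection exercised by bs_cluster_sz */
    check_cluster_sz(4095);  /* second rejection: one byte short of a page */
    return check_cluster_sz(1024 * 1024) ? 1 : 0;  /* a 1 MiB cluster is accepted */
}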
00:06:26.145 [2024-06-11 12:51:44.869316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:26.145 passed 00:06:26.145 Test: bs_resize_md ...passed 00:06:26.145 Test: bs_destroy ...passed 00:06:26.145 Test: bs_type ...passed 00:06:26.145 Test: bs_super_block ...passed 00:06:26.145 Test: bs_test_recover_cluster_count ...passed 00:06:26.145 Test: bs_grow_live ...passed 00:06:26.145 Test: bs_grow_live_no_space ...passed 00:06:26.145 Test: bs_test_grow ...passed 00:06:26.145 Test: blob_serialize_test ...passed 00:06:26.145 Test: super_block_crc ...passed 00:06:26.403 Test: blob_thin_prov_write_count_io ...passed 00:06:26.403 Test: bs_load_iter_test ...passed 00:06:26.403 Test: blob_relations ...[2024-06-11 12:51:45.011460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:26.403 [2024-06-11 12:51:45.011575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.403 [2024-06-11 12:51:45.012559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:26.403 [2024-06-11 12:51:45.012641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.403 passed 00:06:26.403 Test: blob_relations2 ...[2024-06-11 12:51:45.025998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:26.403 [2024-06-11 12:51:45.026071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.403 [2024-06-11 12:51:45.026126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:26.403 [2024-06-11 12:51:45.026160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.403 [2024-06-11 12:51:45.027501] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:26.403 [2024-06-11 12:51:45.027569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.403 [2024-06-11 12:51:45.027993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:26.403 [2024-06-11 12:51:45.028049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.403 passed 00:06:26.403 Test: blob_relations3 ...passed 00:06:26.403 Test: blobstore_clean_power_failure ...passed 00:06:26.403 Test: blob_delete_snapshot_power_failure ...[2024-06-11 12:51:45.183422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:26.403 [2024-06-11 12:51:45.195447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:26.403 [2024-06-11 12:51:45.207602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:26.403 [2024-06-11 12:51:45.207697] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:26.403 [2024-06-11 12:51:45.207729] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.403 [2024-06-11 12:51:45.219863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:26.403 [2024-06-11 12:51:45.219955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:26.403 [2024-06-11 12:51:45.220007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:26.403 [2024-06-11 12:51:45.220041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.403 [2024-06-11 12:51:45.232335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:26.403 [2024-06-11 12:51:45.232428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:26.404 [2024-06-11 12:51:45.232463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:26.404 [2024-06-11 12:51:45.232507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.662 [2024-06-11 12:51:45.245424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:26.662 [2024-06-11 12:51:45.245550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.662 [2024-06-11 12:51:45.257991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:26.662 [2024-06-11 12:51:45.258122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.662 [2024-06-11 12:51:45.270212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:26.662 [2024-06-11 12:51:45.270314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:26.662 passed 00:06:26.662 Test: blob_create_snapshot_power_failure ...[2024-06-11 12:51:45.305987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:26.662 [2024-06-11 12:51:45.318806] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:26.662 [2024-06-11 12:51:45.343224] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:26.662 [2024-06-11 12:51:45.356158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:26.662 passed 00:06:26.662 Test: blob_io_unit ...passed 00:06:26.662 Test: blob_io_unit_compatibility ...passed 00:06:26.662 Test: blob_ext_md_pages ...passed 00:06:26.662 Test: blob_esnap_io_4096_4096 ...passed 00:06:26.662 Test: blob_esnap_io_512_512 ...passed 00:06:26.920 Test: blob_esnap_io_4096_512 ...passed 00:06:26.920 Test: 
blob_esnap_io_512_4096 ...passed 00:06:26.920 Suite: blob_bs_nocopy_extent 00:06:26.920 Test: blob_open ...passed 00:06:26.920 Test: blob_create ...[2024-06-11 12:51:45.586812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:26.920 passed 00:06:26.920 Test: blob_create_loop ...passed 00:06:26.920 Test: blob_create_fail ...[2024-06-11 12:51:45.682495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:26.920 passed 00:06:26.920 Test: blob_create_internal ...passed 00:06:26.920 Test: blob_create_zero_extent ...passed 00:06:27.179 Test: blob_snapshot ...passed 00:06:27.179 Test: blob_clone ...passed 00:06:27.179 Test: blob_inflate ...[2024-06-11 12:51:45.847067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:27.179 passed 00:06:27.179 Test: blob_delete ...passed 00:06:27.179 Test: blob_resize_test ...[2024-06-11 12:51:45.909890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:27.179 passed 00:06:27.179 Test: channel_ops ...passed 00:06:27.179 Test: blob_super ...passed 00:06:27.179 Test: blob_rw_verify_iov ...passed 00:06:27.437 Test: blob_unmap ...passed 00:06:27.437 Test: blob_iter ...passed 00:06:27.437 Test: blob_parse_md ...passed 00:06:27.437 Test: bs_load_pending_removal ...passed 00:06:27.437 Test: bs_unload ...[2024-06-11 12:51:46.154182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:27.437 passed 00:06:27.437 Test: bs_usable_clusters ...passed 00:06:27.437 Test: blob_crc ...[2024-06-11 12:51:46.221040] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:27.437 [2024-06-11 12:51:46.221161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:27.437 passed 00:06:27.437 Test: blob_flags ...passed 00:06:27.696 Test: bs_version ...passed 00:06:27.696 Test: blob_set_xattrs_test ...[2024-06-11 12:51:46.316884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:27.696 [2024-06-11 12:51:46.317232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:27.696 passed 00:06:27.696 Test: blob_thin_prov_alloc ...passed 00:06:27.696 Test: blob_insert_cluster_msg_test ...passed 00:06:27.696 Test: blob_thin_prov_rw ...passed 00:06:27.954 Test: blob_thin_prov_rle ...passed 00:06:27.954 Test: blob_thin_prov_rw_iov ...passed 00:06:27.954 Test: blob_snapshot_rw ...passed 00:06:27.954 Test: blob_snapshot_rw_iov ...passed 00:06:28.213 Test: blob_inflate_rw ...passed 00:06:28.213 Test: blob_snapshot_freeze_io ...passed 00:06:28.213 Test: blob_operation_split_rw ...passed 00:06:28.472 Test: blob_operation_split_rw_iov ...passed 00:06:28.472 Test: blob_simultaneous_operations ...[2024-06-11 12:51:47.193097] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:28.472 [2024-06-11 
12:51:47.193693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:28.472 [2024-06-11 12:51:47.194957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:28.472 [2024-06-11 12:51:47.195210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:28.472 [2024-06-11 12:51:47.205121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:28.472 [2024-06-11 12:51:47.205449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:28.472 [2024-06-11 12:51:47.205960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:28.472 [2024-06-11 12:51:47.206145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:28.472 passed 00:06:28.472 Test: blob_persist_test ...passed 00:06:28.731 Test: blob_decouple_snapshot ...passed 00:06:28.731 Test: blob_seek_io_unit ...passed 00:06:28.731 Test: blob_nested_freezes ...passed 00:06:28.731 Suite: blob_blob_nocopy_extent 00:06:28.731 Test: blob_write ...passed 00:06:28.731 Test: blob_read ...passed 00:06:28.731 Test: blob_rw_verify ...passed 00:06:28.731 Test: blob_rw_verify_iov_nomem ...passed 00:06:28.990 Test: blob_rw_iov_read_only ...passed 00:06:28.990 Test: blob_xattr ...passed 00:06:28.990 Test: blob_dirty_shutdown ...passed 00:06:28.990 Test: blob_is_degraded ...passed 00:06:28.990 Suite: blob_esnap_bs_nocopy_extent 00:06:28.990 Test: blob_esnap_create ...passed 00:06:28.990 Test: blob_esnap_thread_add_remove ...passed 00:06:28.990 Test: blob_esnap_clone_snapshot ...passed 00:06:28.990 Test: blob_esnap_clone_inflate ...passed 00:06:29.249 Test: blob_esnap_clone_decouple ...passed 00:06:29.249 Test: blob_esnap_clone_reload ...passed 00:06:29.249 Test: blob_esnap_hotplug ...passed 00:06:29.249 Suite: blob_copy_noextent 00:06:29.249 Test: blob_init ...[2024-06-11 12:51:47.905339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:29.249 passed 00:06:29.249 Test: blob_thin_provision ...passed 00:06:29.249 Test: blob_read_only ...passed 00:06:29.249 Test: bs_load ...[2024-06-11 12:51:47.949780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:29.249 passed 00:06:29.249 Test: bs_load_custom_cluster_size ...passed 00:06:29.249 Test: bs_load_after_failed_grow ...passed 00:06:29.249 Test: bs_cluster_sz ...[2024-06-11 12:51:47.974367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:29.249 [2024-06-11 12:51:47.974603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
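The bs_is_blob_deletable failures in the blob_simultaneous_operations and blob_relations output above are likewise intentional: a snapshot is refused deletion while it is still open or while more than one clone depends on it. A hypothetical deletability check along those lines, with made-up structure fields rather than SPDK's, might look like this:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical blob state; field names are assumptions, not SPDK's. */
struct blob_state {
    bool     is_snapshot;
    uint32_t open_ref;     /* open references held by callers */
    uint32_t clone_count;  /* clones backed by this snapshot */
};

/* Mirrors the two refusals seen in the log records above. */
static bool blob_is_deletable(const struct blob_state *b)
{
    if (b->is_snapshot && b->open_ref > 0) {
        return false;  /* "Cannot remove snapshot because it is open" */
    }
    if (b->is_snapshot && b->clone_count > 1) {
        return false;  /* "Cannot remove snapshot with more than one clone" */
    }
    return true;
}

int main(void)
{
    struct blob_state open_snap = { .is_snapshot = true, .open_ref = 1, .clone_count = 1 };
    return blob_is_deletable(&open_snap) ? 1 : 0;  /* non-deletable, as the test expects */
}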
00:06:29.249 [2024-06-11 12:51:47.974739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:29.249 passed 00:06:29.249 Test: bs_resize_md ...passed 00:06:29.249 Test: bs_destroy ...passed 00:06:29.249 Test: bs_type ...passed 00:06:29.249 Test: bs_super_block ...passed 00:06:29.249 Test: bs_test_recover_cluster_count ...passed 00:06:29.249 Test: bs_grow_live ...passed 00:06:29.249 Test: bs_grow_live_no_space ...passed 00:06:29.249 Test: bs_test_grow ...passed 00:06:29.249 Test: blob_serialize_test ...passed 00:06:29.508 Test: super_block_crc ...passed 00:06:29.508 Test: blob_thin_prov_write_count_io ...passed 00:06:29.508 Test: bs_load_iter_test ...passed 00:06:29.508 Test: blob_relations ...[2024-06-11 12:51:48.136614] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.508 [2024-06-11 12:51:48.136921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.508 [2024-06-11 12:51:48.137745] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.508 [2024-06-11 12:51:48.137918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.508 passed 00:06:29.508 Test: blob_relations2 ...[2024-06-11 12:51:48.152287] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.508 [2024-06-11 12:51:48.152539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.508 [2024-06-11 12:51:48.152616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.508 [2024-06-11 12:51:48.152742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.508 [2024-06-11 12:51:48.153951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.508 [2024-06-11 12:51:48.154144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.508 [2024-06-11 12:51:48.154665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.508 [2024-06-11 12:51:48.154841] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.508 passed 00:06:29.508 Test: blob_relations3 ...passed 00:06:29.508 Test: blobstore_clean_power_failure ...passed 00:06:29.508 Test: blob_delete_snapshot_power_failure ...[2024-06-11 12:51:48.320290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:29.508 [2024-06-11 12:51:48.332358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:29.508 [2024-06-11 12:51:48.332668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:29.508 [2024-06-11 12:51:48.332734] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.767 [2024-06-11 12:51:48.345265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:29.767 [2024-06-11 12:51:48.345569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:29.767 [2024-06-11 12:51:48.345649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:29.767 [2024-06-11 12:51:48.345861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.767 [2024-06-11 12:51:48.358176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:29.767 [2024-06-11 12:51:48.358485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.767 [2024-06-11 12:51:48.370275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:29.767 [2024-06-11 12:51:48.370553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.767 [2024-06-11 12:51:48.382088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:29.767 [2024-06-11 12:51:48.382373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.767 passed 00:06:29.767 Test: blob_create_snapshot_power_failure ...[2024-06-11 12:51:48.416269] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:29.767 [2024-06-11 12:51:48.440470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:29.767 [2024-06-11 12:51:48.452195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:29.767 passed 00:06:29.767 Test: blob_io_unit ...passed 00:06:29.767 Test: blob_io_unit_compatibility ...passed 00:06:29.767 Test: blob_ext_md_pages ...passed 00:06:29.767 Test: blob_esnap_io_4096_4096 ...passed 00:06:29.767 Test: blob_esnap_io_512_512 ...passed 00:06:30.026 Test: blob_esnap_io_4096_512 ...passed 00:06:30.026 Test: blob_esnap_io_512_4096 ...passed 00:06:30.026 Suite: blob_bs_copy_noextent 00:06:30.026 Test: blob_open ...passed 00:06:30.026 Test: blob_create ...[2024-06-11 12:51:48.682323] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:30.026 passed 00:06:30.026 Test: blob_create_loop ...passed 00:06:30.026 Test: blob_create_fail ...[2024-06-11 12:51:48.770927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:30.026 passed 00:06:30.026 Test: blob_create_internal ...passed 00:06:30.285 Test: blob_create_zero_extent ...passed 00:06:30.285 Test: blob_snapshot ...passed 00:06:30.285 Test: blob_clone ...passed 00:06:30.285 Test: blob_inflate ...[2024-06-11 12:51:48.969982] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:30.285 passed 00:06:30.285 Test: blob_delete ...passed 00:06:30.285 Test: blob_resize_test ...[2024-06-11 12:51:49.032218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:30.285 passed 00:06:30.285 Test: channel_ops ...passed 00:06:30.285 Test: blob_super ...passed 00:06:30.544 Test: blob_rw_verify_iov ...passed 00:06:30.544 Test: blob_unmap ...passed 00:06:30.544 Test: blob_iter ...passed 00:06:30.544 Test: blob_parse_md ...passed 00:06:30.544 Test: bs_load_pending_removal ...passed 00:06:30.544 Test: bs_unload ...[2024-06-11 12:51:49.291280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:30.544 passed 00:06:30.544 Test: bs_usable_clusters ...passed 00:06:30.544 Test: blob_crc ...[2024-06-11 12:51:49.372158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:30.544 [2024-06-11 12:51:49.372476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:30.803 passed 00:06:30.803 Test: blob_flags ...passed 00:06:30.803 Test: bs_version ...passed 00:06:30.803 Test: blob_set_xattrs_test ...[2024-06-11 12:51:49.485147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:30.803 [2024-06-11 12:51:49.485472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:30.803 passed 00:06:30.803 Test: blob_thin_prov_alloc ...passed 00:06:31.062 Test: blob_insert_cluster_msg_test ...passed 00:06:31.062 Test: blob_thin_prov_rw ...passed 00:06:31.062 Test: blob_thin_prov_rle ...passed 00:06:31.062 Test: blob_thin_prov_rw_iov ...passed 00:06:31.062 Test: blob_snapshot_rw ...passed 00:06:31.062 Test: blob_snapshot_rw_iov ...passed 00:06:31.321 Test: blob_inflate_rw ...passed 00:06:31.321 Test: blob_snapshot_freeze_io ...passed 00:06:31.580 Test: blob_operation_split_rw ...passed 00:06:31.580 Test: blob_operation_split_rw_iov ...passed 00:06:31.580 Test: blob_simultaneous_operations ...[2024-06-11 12:51:50.330645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:31.580 [2024-06-11 12:51:50.330961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:31.580 [2024-06-11 12:51:50.331435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:31.580 [2024-06-11 12:51:50.331557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:31.580 [2024-06-11 12:51:50.333959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:31.580 [2024-06-11 12:51:50.334148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:31.580 [2024-06-11 12:51:50.334281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:31.580 [2024-06-11 12:51:50.334382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:31.580 passed 00:06:31.580 Test: blob_persist_test ...passed 00:06:31.846 Test: blob_decouple_snapshot ...passed 00:06:31.846 Test: blob_seek_io_unit ...passed 00:06:31.846 Test: blob_nested_freezes ...passed 00:06:31.846 Suite: blob_blob_copy_noextent 00:06:31.846 Test: blob_write ...passed 00:06:31.846 Test: blob_read ...passed 00:06:31.846 Test: blob_rw_verify ...passed 00:06:31.846 Test: blob_rw_verify_iov_nomem ...passed 00:06:31.846 Test: blob_rw_iov_read_only ...passed 00:06:32.120 Test: blob_xattr ...passed 00:06:32.120 Test: blob_dirty_shutdown ...passed 00:06:32.120 Test: blob_is_degraded ...passed 00:06:32.120 Suite: blob_esnap_bs_copy_noextent 00:06:32.120 Test: blob_esnap_create ...passed 00:06:32.120 Test: blob_esnap_thread_add_remove ...passed 00:06:32.120 Test: blob_esnap_clone_snapshot ...passed 00:06:32.120 Test: blob_esnap_clone_inflate ...passed 00:06:32.120 Test: blob_esnap_clone_decouple ...passed 00:06:32.379 Test: blob_esnap_clone_reload ...passed 00:06:32.379 Test: blob_esnap_hotplug ...passed 00:06:32.379 Suite: blob_copy_extent 00:06:32.379 Test: blob_init ...[2024-06-11 12:51:50.997833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:32.379 passed 00:06:32.379 Test: blob_thin_provision ...passed 00:06:32.379 Test: blob_read_only ...passed 00:06:32.379 Test: bs_load ...[2024-06-11 12:51:51.043857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:32.379 passed 00:06:32.379 Test: bs_load_custom_cluster_size ...passed 00:06:32.379 Test: bs_load_after_failed_grow ...passed 00:06:32.379 Test: bs_cluster_sz ...[2024-06-11 12:51:51.068537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:32.379 [2024-06-11 12:51:51.068755] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:06:32.379 [2024-06-11 12:51:51.068900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:32.379 passed 00:06:32.379 Test: bs_resize_md ...passed 00:06:32.379 Test: bs_destroy ...passed 00:06:32.379 Test: bs_type ...passed 00:06:32.379 Test: bs_super_block ...passed 00:06:32.379 Test: bs_test_recover_cluster_count ...passed 00:06:32.379 Test: bs_grow_live ...passed 00:06:32.379 Test: bs_grow_live_no_space ...passed 00:06:32.379 Test: bs_test_grow ...passed 00:06:32.379 Test: blob_serialize_test ...passed 00:06:32.379 Test: super_block_crc ...passed 00:06:32.379 Test: blob_thin_prov_write_count_io ...passed 00:06:32.379 Test: bs_load_iter_test ...passed 00:06:32.379 Test: blob_relations ...[2024-06-11 12:51:51.215462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.379 [2024-06-11 12:51:51.215771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.638 [2024-06-11 12:51:51.216835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.638 [2024-06-11 12:51:51.217014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.638 passed 00:06:32.638 Test: blob_relations2 ...[2024-06-11 12:51:51.231409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.638 [2024-06-11 12:51:51.231636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.638 [2024-06-11 12:51:51.231735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.638 [2024-06-11 12:51:51.231833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.638 [2024-06-11 12:51:51.233122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.638 [2024-06-11 12:51:51.233307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.638 [2024-06-11 12:51:51.233814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.638 [2024-06-11 12:51:51.234081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.638 passed 00:06:32.638 Test: blob_relations3 ...passed 00:06:32.638 Test: blobstore_clean_power_failure ...passed 00:06:32.638 Test: blob_delete_snapshot_power_failure ...[2024-06-11 12:51:51.384014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:32.638 [2024-06-11 12:51:51.396622] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:32.638 [2024-06-11 12:51:51.409255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:32.639 [2024-06-11 12:51:51.409570] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:32.639 [2024-06-11 12:51:51.409726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.639 [2024-06-11 12:51:51.425002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:32.639 [2024-06-11 12:51:51.425246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:32.639 [2024-06-11 12:51:51.425301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:32.639 [2024-06-11 12:51:51.425403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.639 [2024-06-11 12:51:51.438713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:32.639 [2024-06-11 12:51:51.438950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:32.639 [2024-06-11 12:51:51.439002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:32.639 [2024-06-11 12:51:51.439107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.639 [2024-06-11 12:51:51.451981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:32.639 [2024-06-11 12:51:51.452225] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.639 [2024-06-11 12:51:51.464902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:32.639 [2024-06-11 12:51:51.465120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.898 [2024-06-11 12:51:51.479226] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:32.898 [2024-06-11 12:51:51.479422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.898 passed 00:06:32.898 Test: blob_create_snapshot_power_failure ...[2024-06-11 12:51:51.517844] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:32.898 [2024-06-11 12:51:51.530447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:32.898 [2024-06-11 12:51:51.556019] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:32.898 [2024-06-11 12:51:51.570304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:32.898 passed 00:06:32.898 Test: blob_io_unit ...passed 00:06:32.898 Test: blob_io_unit_compatibility ...passed 00:06:32.898 Test: blob_ext_md_pages ...passed 00:06:32.898 Test: blob_esnap_io_4096_4096 ...passed 00:06:32.898 Test: blob_esnap_io_512_512 ...passed 00:06:33.157 Test: blob_esnap_io_4096_512 ...passed 00:06:33.157 Test: 
blob_esnap_io_512_4096 ...passed 00:06:33.157 Suite: blob_bs_copy_extent 00:06:33.157 Test: blob_open ...passed 00:06:33.157 Test: blob_create ...[2024-06-11 12:51:51.817773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:33.157 passed 00:06:33.157 Test: blob_create_loop ...passed 00:06:33.157 Test: blob_create_fail ...[2024-06-11 12:51:51.918272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:33.157 passed 00:06:33.157 Test: blob_create_internal ...passed 00:06:33.415 Test: blob_create_zero_extent ...passed 00:06:33.415 Test: blob_snapshot ...passed 00:06:33.415 Test: blob_clone ...passed 00:06:33.415 Test: blob_inflate ...[2024-06-11 12:51:52.088589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:33.415 passed 00:06:33.415 Test: blob_delete ...passed 00:06:33.415 Test: blob_resize_test ...[2024-06-11 12:51:52.154376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:33.415 passed 00:06:33.415 Test: channel_ops ...passed 00:06:33.415 Test: blob_super ...passed 00:06:33.674 Test: blob_rw_verify_iov ...passed 00:06:33.674 Test: blob_unmap ...passed 00:06:33.674 Test: blob_iter ...passed 00:06:33.674 Test: blob_parse_md ...passed 00:06:33.674 Test: bs_load_pending_removal ...passed 00:06:33.674 Test: bs_unload ...[2024-06-11 12:51:52.421589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:33.674 passed 00:06:33.674 Test: bs_usable_clusters ...passed 00:06:33.674 Test: blob_crc ...[2024-06-11 12:51:52.490650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:33.674 [2024-06-11 12:51:52.490988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:33.674 passed 00:06:33.933 Test: blob_flags ...passed 00:06:33.933 Test: bs_version ...passed 00:06:33.933 Test: blob_set_xattrs_test ...[2024-06-11 12:51:52.598996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:33.933 [2024-06-11 12:51:52.599340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:33.933 passed 00:06:33.933 Test: blob_thin_prov_alloc ...passed 00:06:33.933 Test: blob_insert_cluster_msg_test ...passed 00:06:34.192 Test: blob_thin_prov_rw ...passed 00:06:34.192 Test: blob_thin_prov_rle ...passed 00:06:34.192 Test: blob_thin_prov_rw_iov ...passed 00:06:34.192 Test: blob_snapshot_rw ...passed 00:06:34.192 Test: blob_snapshot_rw_iov ...passed 00:06:34.451 Test: blob_inflate_rw ...passed 00:06:34.451 Test: blob_snapshot_freeze_io ...passed 00:06:34.710 Test: blob_operation_split_rw ...passed 00:06:34.710 Test: blob_operation_split_rw_iov ...passed 00:06:34.710 Test: blob_simultaneous_operations ...[2024-06-11 12:51:53.465651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:34.710 [2024-06-11 
12:51:53.466026] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:34.710 [2024-06-11 12:51:53.466487] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:34.710 [2024-06-11 12:51:53.466650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:34.710 [2024-06-11 12:51:53.469048] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:34.710 [2024-06-11 12:51:53.469221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:34.710 [2024-06-11 12:51:53.469371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:34.710 [2024-06-11 12:51:53.469568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:34.710 passed 00:06:34.710 Test: blob_persist_test ...passed 00:06:34.969 Test: blob_decouple_snapshot ...passed 00:06:34.969 Test: blob_seek_io_unit ...passed 00:06:34.969 Test: blob_nested_freezes ...passed 00:06:34.969 Suite: blob_blob_copy_extent 00:06:34.969 Test: blob_write ...passed 00:06:34.969 Test: blob_read ...passed 00:06:34.969 Test: blob_rw_verify ...passed 00:06:34.969 Test: blob_rw_verify_iov_nomem ...passed 00:06:34.969 Test: blob_rw_iov_read_only ...passed 00:06:35.228 Test: blob_xattr ...passed 00:06:35.228 Test: blob_dirty_shutdown ...passed 00:06:35.228 Test: blob_is_degraded ...passed 00:06:35.228 Suite: blob_esnap_bs_copy_extent 00:06:35.228 Test: blob_esnap_create ...passed 00:06:35.228 Test: blob_esnap_thread_add_remove ...passed 00:06:35.228 Test: blob_esnap_clone_snapshot ...passed 00:06:35.228 Test: blob_esnap_clone_inflate ...passed 00:06:35.487 Test: blob_esnap_clone_decouple ...passed 00:06:35.487 Test: blob_esnap_clone_reload ...passed 00:06:35.487 Test: blob_esnap_hotplug ...passed 00:06:35.487 00:06:35.487 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.487 suites 16 16 n/a 0 0 00:06:35.487 tests 348 348 348 0 0 00:06:35.487 asserts 92605 92605 92605 0 n/a 00:06:35.487 00:06:35.487 Elapsed time = 12.593 seconds 00:06:35.487 12:51:54 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:06:35.487 00:06:35.487 00:06:35.487 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.487 http://cunit.sourceforge.net/ 00:06:35.487 00:06:35.487 00:06:35.487 Suite: blob_bdev 00:06:35.487 Test: create_bs_dev ...passed 00:06:35.487 Test: create_bs_dev_ro ...[2024-06-11 12:51:54.231367] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:06:35.487 passed 00:06:35.487 Test: create_bs_dev_rw ...passed 00:06:35.487 Test: claim_bs_dev ...[2024-06-11 12:51:54.232377] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:06:35.487 passed 00:06:35.487 Test: claim_bs_dev_ro ...passed 00:06:35.487 Test: deferred_destroy_refs ...passed 00:06:35.487 Test: deferred_destroy_channels ...passed 00:06:35.487 Test: deferred_destroy_threads ...passed 00:06:35.487 00:06:35.487 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.487 suites 1 1 n/a 0 0 00:06:35.487 tests 8 8 8 0 0 00:06:35.487 
asserts 119 119 119 0 n/a 00:06:35.487 00:06:35.487 Elapsed time = 0.001 seconds 00:06:35.487 12:51:54 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:06:35.487 00:06:35.487 00:06:35.487 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.487 http://cunit.sourceforge.net/ 00:06:35.487 00:06:35.487 00:06:35.487 Suite: tree 00:06:35.487 Test: blobfs_tree_op_test ...passed 00:06:35.487 00:06:35.487 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.487 suites 1 1 n/a 0 0 00:06:35.487 tests 1 1 1 0 0 00:06:35.487 asserts 27 27 27 0 n/a 00:06:35.487 00:06:35.487 Elapsed time = 0.000 seconds 00:06:35.487 12:51:54 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:06:35.487 00:06:35.487 00:06:35.487 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.487 http://cunit.sourceforge.net/ 00:06:35.487 00:06:35.487 00:06:35.487 Suite: blobfs_async_ut 00:06:35.746 Test: fs_init ...passed 00:06:35.746 Test: fs_open ...passed 00:06:35.746 Test: fs_create ...passed 00:06:35.746 Test: fs_truncate ...passed 00:06:35.746 Test: fs_rename ...[2024-06-11 12:51:54.435076] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:06:35.746 passed 00:06:35.746 Test: fs_rw_async ...passed 00:06:35.746 Test: fs_writev_readv_async ...passed 00:06:35.746 Test: tree_find_buffer_ut ...passed 00:06:35.746 Test: channel_ops ...passed 00:06:35.746 Test: channel_ops_sync ...passed 00:06:35.746 00:06:35.746 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.746 suites 1 1 n/a 0 0 00:06:35.746 tests 10 10 10 0 0 00:06:35.746 asserts 292 292 292 0 n/a 00:06:35.746 00:06:35.746 Elapsed time = 0.174 seconds 00:06:35.746 12:51:54 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:06:35.746 00:06:35.746 00:06:35.746 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.746 http://cunit.sourceforge.net/ 00:06:35.746 00:06:35.746 00:06:35.746 Suite: blobfs_sync_ut 00:06:36.005 Test: cache_read_after_write ...[2024-06-11 12:51:54.616735] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:06:36.005 passed 00:06:36.005 Test: file_length ...passed 00:06:36.005 Test: append_write_to_extend_blob ...passed 00:06:36.005 Test: partial_buffer ...passed 00:06:36.005 Test: cache_write_null_buffer ...passed 00:06:36.005 Test: fs_create_sync ...passed 00:06:36.005 Test: fs_rename_sync ...passed 00:06:36.005 Test: cache_append_no_cache ...passed 00:06:36.005 Test: fs_delete_file_without_close ...passed 00:06:36.005 00:06:36.005 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.005 suites 1 1 n/a 0 0 00:06:36.005 tests 9 9 9 0 0 00:06:36.005 asserts 345 345 345 0 n/a 00:06:36.005 00:06:36.005 Elapsed time = 0.379 seconds 00:06:36.005 12:51:54 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:06:36.005 00:06:36.005 00:06:36.005 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.005 http://cunit.sourceforge.net/ 00:06:36.005 00:06:36.005 00:06:36.005 Suite: blobfs_bdev_ut 00:06:36.005 Test: spdk_blobfs_bdev_detect_test ...[2024-06-11 12:51:54.812562] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
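Each unit test binary in this run prints the same CUnit banner, a list of suites and tests, and a closing Run Summary with suite, test, and assert counts. For readers unfamiliar with that layout, a minimal standalone CUnit program that produces the same kind of output is sketched below; it is an illustration, not taken from the SPDK tree.

#include <CUnit/Basic.h>

static void trivial_test(void)
{
    CU_ASSERT_EQUAL(1 + 1, 2);  /* counted in the "asserts" column of the summary */
}

int main(void)
{
    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);  /* one "suites" entry */
    if (suite == NULL || CU_add_test(suite, "trivial_test", trivial_test) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_basic_set_mode(CU_BRM_VERBOSE);  /* prints the per-test "...passed" lines */
    CU_basic_run_tests();               /* followed by the Run Summary table */
    CU_cleanup_registry();
    return CU_get_error();
}

Built with the CUnit development package and linked with -lcunit, this prints one suite, one test, and the familiar Run Summary block.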
00:06:36.005 passed 00:06:36.005 Test: spdk_blobfs_bdev_create_test ...[2024-06-11 12:51:54.813513] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:36.005 passed 00:06:36.005 Test: spdk_blobfs_bdev_mount_test ...passed 00:06:36.005 00:06:36.005 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.005 suites 1 1 n/a 0 0 00:06:36.005 tests 3 3 3 0 0 00:06:36.005 asserts 9 9 9 0 n/a 00:06:36.005 00:06:36.005 Elapsed time = 0.001 seconds 00:06:36.005 ************************************ 00:06:36.005 END TEST unittest_blob_blobfs 00:06:36.005 ************************************ 00:06:36.005 00:06:36.005 real 0m13.438s 00:06:36.005 user 0m12.852s 00:06:36.005 sys 0m0.657s 00:06:36.005 12:51:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.005 12:51:54 -- common/autotest_common.sh@10 -- # set +x 00:06:36.265 12:51:54 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:06:36.265 12:51:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.265 12:51:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.265 12:51:54 -- common/autotest_common.sh@10 -- # set +x 00:06:36.265 ************************************ 00:06:36.265 START TEST unittest_event 00:06:36.265 ************************************ 00:06:36.265 12:51:54 -- common/autotest_common.sh@1104 -- # unittest_event 00:06:36.265 12:51:54 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:06:36.265 00:06:36.265 00:06:36.265 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.265 http://cunit.sourceforge.net/ 00:06:36.265 00:06:36.265 00:06:36.265 Suite: app_suite 00:06:36.265 Test: test_spdk_app_parse_args ...app_ut: invalid option -- 'z' 00:06:36.265 app_ut [options] 00:06:36.265 options: 00:06:36.265 -c, --config JSON config file (default none) 00:06:36.265 --json JSON config file (default none) 00:06:36.265 --json-ignore-init-errors 00:06:36.265 don't exit on invalid config entry 00:06:36.265 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:36.265 -g, --single-file-segments 00:06:36.265 force creating just one hugetlbfs file 00:06:36.265 -h, --help show this usage 00:06:36.265 -i, --shm-id shared memory ID (optional) 00:06:36.265 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:36.265 --lcores lcore to CPU mapping list. The list is in the format: 00:06:36.265 [<,lcores[@CPUs]>...] 00:06:36.265 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:36.265 Within the group, '-' is used for range separator, 00:06:36.265 ',' is used for single number separator. 00:06:36.265 '( )' can be omitted for single element group, 00:06:36.265 '@' can be omitted if cpus and lcores have the same value 00:06:36.265 -n, --mem-channels channel number of memory channels used for DPDK 00:06:36.265 -p, --main-core main (primary) core for DPDK 00:06:36.265 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:36.265 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:36.265 --disable-cpumask-locks Disable CPU core lock files. 
00:06:36.265 --silence-noticelog disable notice level logging to stderr 00:06:36.266 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:36.266 -u, --no-pci disable PCI access 00:06:36.266 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:36.266 --max-delay maximum reactor delay (in microseconds) 00:06:36.266 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:36.266 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:36.266 -R, --huge-unlink unlink huge files after initialization 00:06:36.266 -v, --version print SPDK version 00:06:36.266 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:36.266 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:36.266 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:36.266 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:36.266 Tracepoints vary in size and can use more than one trace entry. 00:06:36.266 --rpcs-allowed comma-separated list of permitted RPCS 00:06:36.266 --env-context Opaque context for use of the env implementation 00:06:36.266 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:36.266 --no-huge run without using hugepages 00:06:36.266 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:36.266 -e, --tpoint-group [:] 00:06:36.266 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:36.266 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:36.266 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:36.266 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:36.266 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:36.266 app_ut: unrecognized option '--test-long-opt' 00:06:36.266 app_ut [options] 00:06:36.266 options: 00:06:36.266 -c, --config JSON config file (default none) 00:06:36.266 --json JSON config file (default none) 00:06:36.266 --json-ignore-init-errors 00:06:36.266 don't exit on invalid config entry 00:06:36.266 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:36.266 -g, --single-file-segments 00:06:36.266 force creating just one hugetlbfs file 00:06:36.266 -h, --help show this usage 00:06:36.266 -i, --shm-id shared memory ID (optional) 00:06:36.266 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:36.266 --lcores lcore to CPU mapping list. The list is in the format: 00:06:36.266 [<,lcores[@CPUs]>...] 00:06:36.266 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:36.266 Within the group, '-' is used for range separator, 00:06:36.266 ',' is used for single number separator. 
00:06:36.266 '( )' can be omitted for single element group, 00:06:36.266 '@' can be omitted if cpus and lcores have the same value 00:06:36.266 -n, --mem-channels channel number of memory channels used for DPDK 00:06:36.266 -p, --main-core main (primary) core for DPDK 00:06:36.266 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:36.266 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:36.266 --disable-cpumask-locks Disable CPU core lock files. 00:06:36.266 --silence-noticelog disable notice level logging to stderr 00:06:36.266 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:36.266 -u, --no-pci disable PCI access 00:06:36.266 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:36.266 --max-delay maximum reactor delay (in microseconds) 00:06:36.266 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:36.266 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:36.266 -R, --huge-unlink unlink huge files after initialization 00:06:36.266 -v, --version print SPDK version 00:06:36.266 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:36.266 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:36.266 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:36.266 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:36.266 Tracepoints vary in size and can use more than one trace entry. 00:06:36.266 --rpcs-allowed comma-separated list of permitted RPCS 00:06:36.266 --env-context Opaque context for use of the env implementation 00:06:36.266 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:36.266 --no-huge run without using hugepages 00:06:36.266 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:36.266 -e, --tpoint-group [:] 00:06:36.266 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:36.266 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:36.266 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:36.266 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:36.266 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:36.266 [2024-06-11 12:51:54.904736] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
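The "Duplicated option 'c'" record above is the app framework rejecting an application-supplied getopt string that reuses a letter already claimed by one of the generic options listed in the usage text. A small illustrative check is sketched below; the generic option string is assembled from the short options shown in the usage dump and is an assumption, not SPDK's actual table.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Reject any app option letter that collides with a framework option. */
static bool app_opts_collide(const char *generic, const char *app)
{
    for (const char *p = app; *p != '\0'; p++) {
        if (*p != ':' && strchr(generic, *p) != NULL) {
            fprintf(stderr, "Duplicated option '%c' between app-specific "
                    "command line parameter and generic opts.\n", *p);
            return true;
        }
    }
    return false;
}

int main(void)
{
    /* Assumed framework string covering -c, -d, -g, -h, -i, -m, -n, -p, -r,
     * -s, -u, -v, -B, -A, -R, -L and -e from the usage text above. */
    const char *generic = "c:dghi:m:n:p:r:s:uvB:A:RL:e:";
    return app_opts_collide(generic, "c:") ? 0 : 1;  /* 'c' collides, as in the test */
}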
00:06:36.266 [2024-06-11 12:51:54.905081] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:06:36.266 app_ut [options] 00:06:36.266 options: 00:06:36.266 -c, --config JSON config file (default none) 00:06:36.266 --json JSON config file (default none) 00:06:36.266 --json-ignore-init-errors 00:06:36.266 don't exit on invalid config entry 00:06:36.266 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:36.266 -g, --single-file-segments 00:06:36.266 force creating just one hugetlbfs file 00:06:36.266 -h, --help show this usage 00:06:36.266 -i, --shm-id shared memory ID (optional) 00:06:36.266 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:36.266 --lcores lcore to CPU mapping list. The list is in the format: 00:06:36.266 [<,lcores[@CPUs]>...] 00:06:36.266 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:36.266 Within the group, '-' is used for range separator, 00:06:36.266 ',' is used for single number separator. 00:06:36.266 '( )' can be omitted for single element group, 00:06:36.266 '@' can be omitted if cpus and lcores have the same value 00:06:36.266 -n, --mem-channels channel number of memory channels used for DPDK 00:06:36.266 -p, --main-core main (primary) core for DPDK 00:06:36.266 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:36.266 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:36.266 --disable-cpumask-locks Disable CPU core lock files. 00:06:36.266 --silence-noticelog disable notice level logging to stderr 00:06:36.266 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:36.266 -u, --no-pci disable PCI access 00:06:36.266 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:36.266 --max-delay maximum reactor delay (in microseconds) 00:06:36.266 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:36.266 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:36.266 -R, --huge-unlink unlink huge files after initialization 00:06:36.266 -v, --version print SPDK version 00:06:36.266 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:36.266 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:36.266 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:36.266 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:36.266 Tracepoints vary in size and can use more than one trace entry. 00:06:36.266 --rpcs-allowed comma-separated list of permitted RPCS 00:06:36.266 --env-context Opaque context for use of the env implementation 00:06:36.266 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:36.266 --no-huge run without using hugepages 00:06:36.266 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:36.266 -e, --tpoint-group [:] 00:06:36.266 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:36.266 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:36.266 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:06:36.266 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:36.266 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:36.266 [2024-06-11 12:51:54.908664] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:06:36.266 passed 00:06:36.266 00:06:36.266 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.267 suites 1 1 n/a 0 0 00:06:36.267 tests 1 1 1 0 0 00:06:36.267 asserts 8 8 8 0 n/a 00:06:36.267 00:06:36.267 Elapsed time = 0.002 seconds 00:06:36.267 12:51:54 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:06:36.267 00:06:36.267 00:06:36.267 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.267 http://cunit.sourceforge.net/ 00:06:36.267 00:06:36.267 00:06:36.267 Suite: app_suite 00:06:36.267 Test: test_create_reactor ...passed 00:06:36.267 Test: test_init_reactors ...passed 00:06:36.267 Test: test_event_call ...passed 00:06:36.267 Test: test_schedule_thread ...passed 00:06:36.267 Test: test_reschedule_thread ...passed 00:06:36.267 Test: test_bind_thread ...passed 00:06:36.267 Test: test_for_each_reactor ...passed 00:06:36.267 Test: test_reactor_stats ...passed 00:06:36.267 Test: test_scheduler ...passed 00:06:36.267 Test: test_governor ...passed 00:06:36.267 00:06:36.267 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.267 suites 1 1 n/a 0 0 00:06:36.267 tests 10 10 10 0 0 00:06:36.267 asserts 344 344 344 0 n/a 00:06:36.267 00:06:36.267 Elapsed time = 0.020 seconds 00:06:36.267 ************************************ 00:06:36.267 END TEST unittest_event 00:06:36.267 ************************************ 00:06:36.267 00:06:36.267 real 0m0.104s 00:06:36.267 user 0m0.043s 00:06:36.267 sys 0m0.046s 00:06:36.267 12:51:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.267 12:51:54 -- common/autotest_common.sh@10 -- # set +x 00:06:36.267 12:51:55 -- unit/unittest.sh@233 -- # uname -s 00:06:36.267 12:51:55 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:06:36.267 12:51:55 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:06:36.267 12:51:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.267 12:51:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.267 12:51:55 -- common/autotest_common.sh@10 -- # set +x 00:06:36.267 ************************************ 00:06:36.267 START TEST unittest_ftl 00:06:36.267 ************************************ 00:06:36.267 12:51:55 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:06:36.267 12:51:55 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:06:36.267 00:06:36.267 00:06:36.267 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.267 http://cunit.sourceforge.net/ 00:06:36.267 00:06:36.267 00:06:36.267 Suite: ftl_band_suite 00:06:36.267 Test: test_band_block_offset_from_addr_base ...passed 00:06:36.525 Test: test_band_block_offset_from_addr_offset ...passed 00:06:36.525 Test: test_band_addr_from_block_offset ...passed 00:06:36.525 Test: test_band_set_addr ...passed 00:06:36.525 Test: test_invalidate_addr ...passed 00:06:36.525 Test: test_next_xfer_addr ...passed 00:06:36.525 00:06:36.525 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.525 suites 1 1 n/a 0 0 00:06:36.525 tests 6 6 6 0 0 00:06:36.525 asserts 30356 30356 30356 0 n/a 00:06:36.525 
00:06:36.525 Elapsed time = 0.176 seconds 00:06:36.525 12:51:55 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:06:36.525 00:06:36.525 00:06:36.525 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.525 http://cunit.sourceforge.net/ 00:06:36.525 00:06:36.525 00:06:36.525 Suite: ftl_bitmap 00:06:36.525 Test: test_ftl_bitmap_create ...[2024-06-11 12:51:55.302983] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:06:36.525 [2024-06-11 12:51:55.303457] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:06:36.525 passed 00:06:36.525 Test: test_ftl_bitmap_get ...passed 00:06:36.525 Test: test_ftl_bitmap_set ...passed 00:06:36.526 Test: test_ftl_bitmap_clear ...passed 00:06:36.526 Test: test_ftl_bitmap_find_first_set ...passed 00:06:36.526 Test: test_ftl_bitmap_find_first_clear ...passed 00:06:36.526 Test: test_ftl_bitmap_count_set ...passed 00:06:36.526 00:06:36.526 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.526 suites 1 1 n/a 0 0 00:06:36.526 tests 7 7 7 0 0 00:06:36.526 asserts 137 137 137 0 n/a 00:06:36.526 00:06:36.526 Elapsed time = 0.001 seconds 00:06:36.526 12:51:55 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:06:36.526 00:06:36.526 00:06:36.526 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.526 http://cunit.sourceforge.net/ 00:06:36.526 00:06:36.526 00:06:36.526 Suite: ftl_io_suite 00:06:36.526 Test: test_completion ...passed 00:06:36.526 Test: test_multiple_ios ...passed 00:06:36.526 00:06:36.526 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.526 suites 1 1 n/a 0 0 00:06:36.526 tests 2 2 2 0 0 00:06:36.526 asserts 47 47 47 0 n/a 00:06:36.526 00:06:36.526 Elapsed time = 0.003 seconds 00:06:36.526 12:51:55 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:06:36.785 00:06:36.785 00:06:36.785 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.785 http://cunit.sourceforge.net/ 00:06:36.785 00:06:36.785 00:06:36.785 Suite: ftl_mngt 00:06:36.785 Test: test_next_step ...passed 00:06:36.785 Test: test_continue_step ...passed 00:06:36.785 Test: test_get_func_and_step_cntx_alloc ...passed 00:06:36.785 Test: test_fail_step ...passed 00:06:36.785 Test: test_mngt_call_and_call_rollback ...passed 00:06:36.785 Test: test_nested_process_failure ...passed 00:06:36.785 00:06:36.785 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.785 suites 1 1 n/a 0 0 00:06:36.785 tests 6 6 6 0 0 00:06:36.785 asserts 176 176 176 0 n/a 00:06:36.785 00:06:36.785 Elapsed time = 0.002 seconds 00:06:36.785 12:51:55 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:06:36.785 00:06:36.785 00:06:36.785 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.785 http://cunit.sourceforge.net/ 00:06:36.785 00:06:36.785 00:06:36.785 Suite: ftl_mempool 00:06:36.785 Test: test_ftl_mempool_create ...passed 00:06:36.785 Test: test_ftl_mempool_get_put ...passed 00:06:36.785 00:06:36.785 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.785 suites 1 1 n/a 0 0 00:06:36.785 tests 2 2 2 0 0 00:06:36.785 asserts 36 36 36 0 n/a 00:06:36.785 00:06:36.785 Elapsed time = 0.000 seconds 00:06:36.785 12:51:55 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:06:36.785 00:06:36.785 00:06:36.785 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.785 http://cunit.sourceforge.net/ 00:06:36.785 00:06:36.785 00:06:36.785 Suite: ftl_addr64_suite 00:06:36.785 Test: test_addr_cached ...passed 00:06:36.785 00:06:36.785 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.785 suites 1 1 n/a 0 0 00:06:36.785 tests 1 1 1 0 0 00:06:36.785 asserts 1536 1536 1536 0 n/a 00:06:36.785 00:06:36.785 Elapsed time = 0.000 seconds 00:06:36.785 12:51:55 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:06:36.785 00:06:36.785 00:06:36.785 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.785 http://cunit.sourceforge.net/ 00:06:36.785 00:06:36.785 00:06:36.785 Suite: ftl_sb 00:06:36.785 Test: test_sb_crc_v2 ...passed 00:06:36.785 Test: test_sb_crc_v3 ...passed 00:06:36.785 Test: test_sb_v3_md_layout ...[2024-06-11 12:51:55.462691] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:06:36.785 [2024-06-11 12:51:55.463242] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:36.785 [2024-06-11 12:51:55.463446] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:36.785 [2024-06-11 12:51:55.463661] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:36.785 [2024-06-11 12:51:55.463830] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:36.785 [2024-06-11 12:51:55.464055] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:06:36.785 [2024-06-11 12:51:55.464233] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:36.785 [2024-06-11 12:51:55.464452] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:36.785 [2024-06-11 12:51:55.464645] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:36.785 [2024-06-11 12:51:55.464792] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:36.785 [2024-06-11 12:51:55.464962] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:36.785 passed 00:06:36.785 Test: test_sb_v5_md_layout ...passed 00:06:36.785 00:06:36.785 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.785 suites 1 1 n/a 0 0 00:06:36.785 tests 4 4 4 0 0 00:06:36.785 asserts 148 148 148 0 n/a 00:06:36.785 00:06:36.785 Elapsed time = 0.003 seconds 00:06:36.785 12:51:55 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:06:36.785 00:06:36.785 00:06:36.785 CUnit - A unit testing framework 
for C - Version 2.1-3 00:06:36.785 http://cunit.sourceforge.net/ 00:06:36.785 00:06:36.785 00:06:36.785 Suite: ftl_layout_upgrade 00:06:36.785 Test: test_l2p_upgrade ...passed 00:06:36.785 00:06:36.785 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.785 suites 1 1 n/a 0 0 00:06:36.785 tests 1 1 1 0 0 00:06:36.785 asserts 140 140 140 0 n/a 00:06:36.785 00:06:36.785 Elapsed time = 0.001 seconds 00:06:36.785 ************************************ 00:06:36.785 END TEST unittest_ftl 00:06:36.785 ************************************ 00:06:36.785 00:06:36.785 real 0m0.481s 00:06:36.785 user 0m0.231s 00:06:36.785 sys 0m0.238s 00:06:36.785 12:51:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.785 12:51:55 -- common/autotest_common.sh@10 -- # set +x 00:06:36.785 12:51:55 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:36.785 12:51:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.785 12:51:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.785 12:51:55 -- common/autotest_common.sh@10 -- # set +x 00:06:36.785 ************************************ 00:06:36.785 START TEST unittest_accel 00:06:36.785 ************************************ 00:06:36.785 12:51:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:36.785 00:06:36.785 00:06:36.785 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.785 http://cunit.sourceforge.net/ 00:06:36.785 00:06:36.785 00:06:36.785 Suite: accel_sequence 00:06:36.785 Test: test_sequence_fill_copy ...passed 00:06:36.785 Test: test_sequence_abort ...passed 00:06:36.785 Test: test_sequence_append_error ...passed 00:06:36.785 Test: test_sequence_completion_error ...[2024-06-11 12:51:55.584749] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f92218c27c0 00:06:36.785 [2024-06-11 12:51:55.585163] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f92218c27c0 00:06:36.785 [2024-06-11 12:51:55.585321] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f92218c27c0 00:06:36.785 [2024-06-11 12:51:55.585507] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f92218c27c0 00:06:36.785 passed 00:06:36.785 Test: test_sequence_decompress ...passed 00:06:36.785 Test: test_sequence_reverse ...passed 00:06:36.785 Test: test_sequence_copy_elision ...passed 00:06:36.785 Test: test_sequence_accel_buffers ...passed 00:06:36.785 Test: test_sequence_memory_domain ...[2024-06-11 12:51:55.598031] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:06:36.785 [2024-06-11 12:51:55.598332] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:06:36.785 passed 00:06:36.785 Test: test_sequence_module_memory_domain ...passed 00:06:36.785 Test: test_sequence_crypto ...passed 00:06:36.785 Test: test_sequence_driver ...[2024-06-11 12:51:55.605744] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f9220c9a7c0 using driver: ut 00:06:36.785 
[2024-06-11 12:51:55.605965] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f9220c9a7c0 through driver: ut 00:06:36.785 passed 00:06:36.785 Test: test_sequence_same_iovs ...passed 00:06:36.785 Test: test_sequence_crc32 ...passed 00:06:36.785 Suite: accel 00:06:36.785 Test: test_spdk_accel_task_complete ...passed 00:06:36.785 Test: test_get_task ...passed 00:06:36.785 Test: test_spdk_accel_submit_copy ...passed 00:06:36.785 Test: test_spdk_accel_submit_dualcast ...[2024-06-11 12:51:55.612094] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:36.785 [2024-06-11 12:51:55.612248] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:36.785 passed 00:06:36.786 Test: test_spdk_accel_submit_compare ...passed 00:06:36.786 Test: test_spdk_accel_submit_fill ...passed 00:06:36.786 Test: test_spdk_accel_submit_crc32c ...passed 00:06:36.786 Test: test_spdk_accel_submit_crc32cv ...passed 00:06:36.786 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:06:36.786 Test: test_spdk_accel_submit_xor ...passed 00:06:36.786 Test: test_spdk_accel_module_find_by_name ...passed 00:06:36.786 Test: test_spdk_accel_module_register ...passed 00:06:36.786 00:06:36.786 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.786 suites 2 2 n/a 0 0 00:06:36.786 tests 26 26 26 0 0 00:06:36.786 asserts 831 831 831 0 n/a 00:06:36.786 00:06:36.786 Elapsed time = 0.036 seconds 00:06:37.044 ************************************ 00:06:37.044 END TEST unittest_accel 00:06:37.044 ************************************ 00:06:37.044 00:06:37.044 real 0m0.077s 00:06:37.044 user 0m0.030s 00:06:37.044 sys 0m0.042s 00:06:37.044 12:51:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.044 12:51:55 -- common/autotest_common.sh@10 -- # set +x 00:06:37.044 12:51:55 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:37.044 12:51:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.044 12:51:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.044 12:51:55 -- common/autotest_common.sh@10 -- # set +x 00:06:37.044 ************************************ 00:06:37.044 START TEST unittest_ioat 00:06:37.044 ************************************ 00:06:37.044 12:51:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:37.044 00:06:37.044 00:06:37.044 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.044 http://cunit.sourceforge.net/ 00:06:37.044 00:06:37.044 00:06:37.044 Suite: ioat 00:06:37.044 Test: ioat_state_check ...passed 00:06:37.044 00:06:37.044 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.044 suites 1 1 n/a 0 0 00:06:37.044 tests 1 1 1 0 0 00:06:37.044 asserts 32 32 32 0 n/a 00:06:37.044 00:06:37.045 Elapsed time = 0.000 seconds 00:06:37.045 ************************************ 00:06:37.045 END TEST unittest_ioat 00:06:37.045 ************************************ 00:06:37.045 00:06:37.045 real 0m0.026s 00:06:37.045 user 0m0.026s 00:06:37.045 sys 0m0.000s 00:06:37.045 12:51:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.045 12:51:55 -- common/autotest_common.sh@10 -- # set +x 00:06:37.045 12:51:55 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:37.045 12:51:55 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:37.045 12:51:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.045 12:51:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.045 12:51:55 -- common/autotest_common.sh@10 -- # set +x 00:06:37.045 ************************************ 00:06:37.045 START TEST unittest_idxd_user 00:06:37.045 ************************************ 00:06:37.045 12:51:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:37.045 00:06:37.045 00:06:37.045 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.045 http://cunit.sourceforge.net/ 00:06:37.045 00:06:37.045 00:06:37.045 Suite: idxd_user 00:06:37.045 Test: test_idxd_wait_cmd ...[2024-06-11 12:51:55.767446] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:37.045 [2024-06-11 12:51:55.768633] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:06:37.045 passed 00:06:37.045 Test: test_idxd_reset_dev ...[2024-06-11 12:51:55.769558] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:37.045 [2024-06-11 12:51:55.770002] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:06:37.045 passed 00:06:37.045 Test: test_idxd_group_config ...passed 00:06:37.045 Test: test_idxd_wq_config ...passed 00:06:37.045 00:06:37.045 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.045 suites 1 1 n/a 0 0 00:06:37.045 tests 4 4 4 0 0 00:06:37.045 asserts 20 20 20 0 n/a 00:06:37.045 00:06:37.045 Elapsed time = 0.001 seconds 00:06:37.045 00:06:37.045 real 0m0.032s 00:06:37.045 user 0m0.025s 00:06:37.045 sys 0m0.004s 00:06:37.045 12:51:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.045 12:51:55 -- common/autotest_common.sh@10 -- # set +x 00:06:37.045 ************************************ 00:06:37.045 END TEST unittest_idxd_user 00:06:37.045 ************************************ 00:06:37.045 12:51:55 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:06:37.045 12:51:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.045 12:51:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.045 12:51:55 -- common/autotest_common.sh@10 -- # set +x 00:06:37.045 ************************************ 00:06:37.045 START TEST unittest_iscsi 00:06:37.045 ************************************ 00:06:37.045 12:51:55 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:06:37.045 12:51:55 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:06:37.045 00:06:37.045 00:06:37.045 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.045 http://cunit.sourceforge.net/ 00:06:37.045 00:06:37.045 00:06:37.045 Suite: conn_suite 00:06:37.045 Test: read_task_split_in_order_case ...passed 00:06:37.045 Test: read_task_split_reverse_order_case ...passed 00:06:37.045 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:06:37.045 Test: process_non_read_task_completion_test ...passed 00:06:37.045 Test: free_tasks_on_connection ...passed 00:06:37.045 Test: free_tasks_with_queued_datain ...passed 00:06:37.045 Test: 
abort_queued_datain_task_test ...passed 00:06:37.045 Test: abort_queued_datain_tasks_test ...passed 00:06:37.045 00:06:37.045 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.045 suites 1 1 n/a 0 0 00:06:37.045 tests 8 8 8 0 0 00:06:37.045 asserts 230 230 230 0 n/a 00:06:37.045 00:06:37.045 Elapsed time = 0.000 seconds 00:06:37.045 12:51:55 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:06:37.304 00:06:37.304 00:06:37.304 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.304 http://cunit.sourceforge.net/ 00:06:37.304 00:06:37.304 00:06:37.304 Suite: iscsi_suite 00:06:37.304 Test: param_negotiation_test ...passed 00:06:37.304 Test: list_negotiation_test ...passed 00:06:37.304 Test: parse_valid_test ...passed 00:06:37.304 Test: parse_invalid_test ...[2024-06-11 12:51:55.886957] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:37.304 [2024-06-11 12:51:55.887348] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:37.304 [2024-06-11 12:51:55.887510] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:06:37.304 [2024-06-11 12:51:55.887662] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:06:37.304 [2024-06-11 12:51:55.887914] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:06:37.304 [2024-06-11 12:51:55.888084] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:06:37.304 [2024-06-11 12:51:55.888297] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:06:37.304 passed 00:06:37.304 00:06:37.304 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.304 suites 1 1 n/a 0 0 00:06:37.304 tests 4 4 4 0 0 00:06:37.304 asserts 161 161 161 0 n/a 00:06:37.304 00:06:37.304 Elapsed time = 0.005 seconds 00:06:37.304 12:51:55 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:06:37.304 00:06:37.304 00:06:37.304 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.304 http://cunit.sourceforge.net/ 00:06:37.304 00:06:37.304 00:06:37.304 Suite: iscsi_target_node_suite 00:06:37.304 Test: add_lun_test_cases ...[2024-06-11 12:51:55.919409] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:06:37.304 [2024-06-11 12:51:55.919901] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:06:37.304 [2024-06-11 12:51:55.920153] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:37.304 [2024-06-11 12:51:55.920324] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:37.304 [2024-06-11 12:51:55.920528] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:06:37.304 passed 00:06:37.304 Test: allow_any_allowed ...passed 00:06:37.304 Test: allow_ipv6_allowed ...passed 00:06:37.304 Test: allow_ipv6_denied ...passed 00:06:37.304 Test: allow_ipv6_invalid ...passed 00:06:37.305 Test: allow_ipv4_allowed ...passed 00:06:37.305 Test: allow_ipv4_denied ...passed 00:06:37.305 Test: allow_ipv4_invalid 
...passed 00:06:37.305 Test: node_access_allowed ...passed 00:06:37.305 Test: node_access_denied_by_empty_netmask ...passed 00:06:37.305 Test: node_access_multi_initiator_groups_cases ...passed 00:06:37.305 Test: allow_iscsi_name_multi_maps_case ...passed 00:06:37.305 Test: chap_param_test_cases ...[2024-06-11 12:51:55.922801] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:06:37.305 [2024-06-11 12:51:55.922968] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:06:37.305 [2024-06-11 12:51:55.923120] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:06:37.305 [2024-06-11 12:51:55.923175] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:06:37.305 [2024-06-11 12:51:55.923267] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:06:37.305 passed 00:06:37.305 00:06:37.305 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.305 suites 1 1 n/a 0 0 00:06:37.305 tests 13 13 13 0 0 00:06:37.305 asserts 50 50 50 0 n/a 00:06:37.305 00:06:37.305 Elapsed time = 0.002 seconds 00:06:37.305 12:51:55 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:06:37.305 00:06:37.305 00:06:37.305 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.305 http://cunit.sourceforge.net/ 00:06:37.305 00:06:37.305 00:06:37.305 Suite: iscsi_suite 00:06:37.305 Test: op_login_check_target_test ...[2024-06-11 12:51:55.958872] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:06:37.305 passed 00:06:37.305 Test: op_login_session_normal_test ...[2024-06-11 12:51:55.959474] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:37.305 [2024-06-11 12:51:55.959633] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:37.305 [2024-06-11 12:51:55.959760] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:37.305 [2024-06-11 12:51:55.959919] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:06:37.305 [2024-06-11 12:51:55.960124] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:37.305 [2024-06-11 12:51:55.960333] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:06:37.305 [2024-06-11 12:51:55.960490] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:37.305 passed 00:06:37.305 Test: maxburstlength_test ...[2024-06-11 12:51:55.960976] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:37.305 [2024-06-11 12:51:55.961150] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:06:37.305 passed 00:06:37.305 Test: underflow_for_read_transfer_test ...passed 00:06:37.305 Test: underflow_for_zero_read_transfer_test ...passed 00:06:37.305 Test: underflow_for_request_sense_test ...passed 00:06:37.305 Test: underflow_for_check_condition_test ...passed 00:06:37.305 Test: add_transfer_task_test ...passed 00:06:37.305 Test: get_transfer_task_test ...passed 00:06:37.305 Test: del_transfer_task_test ...passed 00:06:37.305 Test: clear_all_transfer_tasks_test ...passed 00:06:37.305 Test: build_iovs_test ...passed 00:06:37.305 Test: build_iovs_with_md_test ...passed 00:06:37.305 Test: pdu_hdr_op_login_test ...[2024-06-11 12:51:55.964544] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:06:37.305 [2024-06-11 12:51:55.964757] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:06:37.305 [2024-06-11 12:51:55.964937] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:06:37.305 passed 00:06:37.305 Test: pdu_hdr_op_text_test ...[2024-06-11 12:51:55.965306] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:37.305 [2024-06-11 12:51:55.965498] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:06:37.305 [2024-06-11 12:51:55.965634] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:06:37.305 passed 00:06:37.305 Test: pdu_hdr_op_logout_test ...[2024-06-11 12:51:55.966029] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:06:37.305 passed 00:06:37.305 Test: pdu_hdr_op_scsi_test ...[2024-06-11 12:51:55.966428] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:37.305 [2024-06-11 12:51:55.966503] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:37.305 [2024-06-11 12:51:55.966677] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:06:37.305 [2024-06-11 12:51:55.966876] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:37.305 [2024-06-11 12:51:55.967080] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:06:37.305 [2024-06-11 12:51:55.967384] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:06:37.305 passed 00:06:37.305 Test: pdu_hdr_op_task_mgmt_test ...[2024-06-11 12:51:55.967744] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:06:37.305 [2024-06-11 12:51:55.967928] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:06:37.305 passed 00:06:37.305 Test: pdu_hdr_op_nopout_test ...[2024-06-11 12:51:55.968406] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:06:37.305 [2024-06-11 12:51:55.968634] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:37.305 [2024-06-11 12:51:55.968777] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:37.305 [2024-06-11 12:51:55.968902] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:06:37.305 passed 00:06:37.305 Test: pdu_hdr_op_data_test ...[2024-06-11 12:51:55.969251] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:06:37.305 [2024-06-11 12:51:55.969360] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:06:37.305 [2024-06-11 12:51:55.969470] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:37.305 [2024-06-11 12:51:55.969598] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:06:37.305 [2024-06-11 12:51:55.969776] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:06:37.305 [2024-06-11 12:51:55.969983] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:06:37.305 [2024-06-11 12:51:55.970132] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:06:37.305 passed 00:06:37.305 Test: empty_text_with_cbit_test ...passed 00:06:37.305 Test: pdu_payload_read_test ...[2024-06-11 
12:51:55.972705] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:06:37.305 passed 00:06:37.305 Test: data_out_pdu_sequence_test ...passed 00:06:37.305 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:06:37.305 00:06:37.305 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.305 suites 1 1 n/a 0 0 00:06:37.305 tests 24 24 24 0 0 00:06:37.305 asserts 150253 150253 150253 0 n/a 00:06:37.305 00:06:37.305 Elapsed time = 0.018 seconds 00:06:37.305 12:51:55 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:06:37.305 00:06:37.305 00:06:37.305 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.305 http://cunit.sourceforge.net/ 00:06:37.305 00:06:37.305 00:06:37.305 Suite: init_grp_suite 00:06:37.305 Test: create_initiator_group_success_case ...passed 00:06:37.305 Test: find_initiator_group_success_case ...passed 00:06:37.305 Test: register_initiator_group_twice_case ...passed 00:06:37.305 Test: add_initiator_name_success_case ...passed 00:06:37.305 Test: add_initiator_name_fail_case ...[2024-06-11 12:51:56.017945] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:06:37.305 passed 00:06:37.305 Test: delete_all_initiator_names_success_case ...passed 00:06:37.305 Test: add_netmask_success_case ...passed 00:06:37.305 Test: add_netmask_fail_case ...[2024-06-11 12:51:56.018958] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:06:37.305 passed 00:06:37.305 Test: delete_all_netmasks_success_case ...passed 00:06:37.305 Test: initiator_name_overwrite_all_to_any_case ...passed 00:06:37.305 Test: netmask_overwrite_all_to_any_case ...passed 00:06:37.305 Test: add_delete_initiator_names_case ...passed 00:06:37.305 Test: add_duplicated_initiator_names_case ...passed 00:06:37.305 Test: delete_nonexisting_initiator_names_case ...passed 00:06:37.305 Test: add_delete_netmasks_case ...passed 00:06:37.305 Test: add_duplicated_netmasks_case ...passed 00:06:37.305 Test: delete_nonexisting_netmasks_case ...passed 00:06:37.305 00:06:37.305 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.305 suites 1 1 n/a 0 0 00:06:37.305 tests 17 17 17 0 0 00:06:37.305 asserts 108 108 108 0 n/a 00:06:37.305 00:06:37.305 Elapsed time = 0.002 seconds 00:06:37.305 12:51:56 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:06:37.305 00:06:37.305 00:06:37.305 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.306 http://cunit.sourceforge.net/ 00:06:37.306 00:06:37.306 00:06:37.306 Suite: portal_grp_suite 00:06:37.306 Test: portal_create_ipv4_normal_case ...passed 00:06:37.306 Test: portal_create_ipv6_normal_case ...passed 00:06:37.306 Test: portal_create_ipv4_wildcard_case ...passed 00:06:37.306 Test: portal_create_ipv6_wildcard_case ...passed 00:06:37.306 Test: portal_create_twice_case ...[2024-06-11 12:51:56.052145] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:06:37.306 passed 00:06:37.306 Test: portal_grp_register_unregister_case ...passed 00:06:37.306 Test: portal_grp_register_twice_case ...passed 00:06:37.306 Test: portal_grp_add_delete_case ...passed 00:06:37.306 Test: portal_grp_add_delete_twice_case ...passed 00:06:37.306 00:06:37.306 Run Summary: 
Type Total Ran Passed Failed Inactive 00:06:37.306 suites 1 1 n/a 0 0 00:06:37.306 tests 9 9 9 0 0 00:06:37.306 asserts 44 44 44 0 n/a 00:06:37.306 00:06:37.306 Elapsed time = 0.004 seconds 00:06:37.306 00:06:37.306 real 0m0.239s 00:06:37.306 user 0m0.129s 00:06:37.306 sys 0m0.093s 00:06:37.306 12:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.306 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.306 ************************************ 00:06:37.306 END TEST unittest_iscsi 00:06:37.306 ************************************ 00:06:37.306 12:51:56 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:06:37.306 12:51:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.306 12:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.306 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.306 ************************************ 00:06:37.306 START TEST unittest_json 00:06:37.306 ************************************ 00:06:37.306 12:51:56 -- common/autotest_common.sh@1104 -- # unittest_json 00:06:37.306 12:51:56 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:06:37.564 00:06:37.564 00:06:37.564 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.564 http://cunit.sourceforge.net/ 00:06:37.564 00:06:37.564 00:06:37.564 Suite: json 00:06:37.564 Test: test_parse_literal ...passed 00:06:37.564 Test: test_parse_string_simple ...passed 00:06:37.564 Test: test_parse_string_control_chars ...passed 00:06:37.564 Test: test_parse_string_utf8 ...passed 00:06:37.564 Test: test_parse_string_escapes_twochar ...passed 00:06:37.565 Test: test_parse_string_escapes_unicode ...passed 00:06:37.565 Test: test_parse_number ...passed 00:06:37.565 Test: test_parse_array ...passed 00:06:37.565 Test: test_parse_object ...passed 00:06:37.565 Test: test_parse_nesting ...passed 00:06:37.565 Test: test_parse_comment ...passed 00:06:37.565 00:06:37.565 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.565 suites 1 1 n/a 0 0 00:06:37.565 tests 11 11 11 0 0 00:06:37.565 asserts 1516 1516 1516 0 n/a 00:06:37.565 00:06:37.565 Elapsed time = 0.001 seconds 00:06:37.565 12:51:56 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:06:37.565 00:06:37.565 00:06:37.565 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.565 http://cunit.sourceforge.net/ 00:06:37.565 00:06:37.565 00:06:37.565 Suite: json 00:06:37.565 Test: test_strequal ...passed 00:06:37.565 Test: test_num_to_uint16 ...passed 00:06:37.565 Test: test_num_to_int32 ...passed 00:06:37.565 Test: test_num_to_uint64 ...passed 00:06:37.565 Test: test_decode_object ...passed 00:06:37.565 Test: test_decode_array ...passed 00:06:37.565 Test: test_decode_bool ...passed 00:06:37.565 Test: test_decode_uint16 ...passed 00:06:37.565 Test: test_decode_int32 ...passed 00:06:37.565 Test: test_decode_uint32 ...passed 00:06:37.565 Test: test_decode_uint64 ...passed 00:06:37.565 Test: test_decode_string ...passed 00:06:37.565 Test: test_decode_uuid ...passed 00:06:37.565 Test: test_find ...passed 00:06:37.565 Test: test_find_array ...passed 00:06:37.565 Test: test_iterating ...passed 00:06:37.565 Test: test_free_object ...passed 00:06:37.565 00:06:37.565 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.565 suites 1 1 n/a 0 0 00:06:37.565 tests 17 17 17 0 0 00:06:37.565 asserts 236 236 236 0 n/a 00:06:37.565 00:06:37.565 Elapsed time = 0.001 seconds 00:06:37.565 
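Each suite above runs as a standalone CUnit binary that unittest.sh invokes by absolute path, so a single failing suite can be reproduced outside the harness. A minimal sketch, reusing a path from the invocations logged above and assuming gdb is available on the build VM:

    /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut
    gdb --args /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut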
12:51:56 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:06:37.565 00:06:37.565 00:06:37.565 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.565 http://cunit.sourceforge.net/ 00:06:37.565 00:06:37.565 00:06:37.565 Suite: json 00:06:37.565 Test: test_write_literal ...passed 00:06:37.565 Test: test_write_string_simple ...passed 00:06:37.565 Test: test_write_string_escapes ...passed 00:06:37.565 Test: test_write_string_utf16le ...passed 00:06:37.565 Test: test_write_number_int32 ...passed 00:06:37.565 Test: test_write_number_uint32 ...passed 00:06:37.565 Test: test_write_number_uint128 ...passed 00:06:37.565 Test: test_write_string_number_uint128 ...passed 00:06:37.565 Test: test_write_number_int64 ...passed 00:06:37.565 Test: test_write_number_uint64 ...passed 00:06:37.565 Test: test_write_number_double ...passed 00:06:37.565 Test: test_write_uuid ...passed 00:06:37.565 Test: test_write_array ...passed 00:06:37.565 Test: test_write_object ...passed 00:06:37.565 Test: test_write_nesting ...passed 00:06:37.565 Test: test_write_val ...passed 00:06:37.565 00:06:37.565 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.565 suites 1 1 n/a 0 0 00:06:37.565 tests 16 16 16 0 0 00:06:37.565 asserts 918 918 918 0 n/a 00:06:37.565 00:06:37.565 Elapsed time = 0.004 seconds 00:06:37.565 12:51:56 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:06:37.565 00:06:37.565 00:06:37.565 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.565 http://cunit.sourceforge.net/ 00:06:37.565 00:06:37.565 00:06:37.565 Suite: jsonrpc 00:06:37.565 Test: test_parse_request ...passed 00:06:37.565 Test: test_parse_request_streaming ...passed 00:06:37.565 00:06:37.565 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.565 suites 1 1 n/a 0 0 00:06:37.565 tests 2 2 2 0 0 00:06:37.565 asserts 289 289 289 0 n/a 00:06:37.565 00:06:37.565 Elapsed time = 0.004 seconds 00:06:37.565 00:06:37.565 real 0m0.138s 00:06:37.565 user 0m0.084s 00:06:37.565 sys 0m0.047s 00:06:37.565 12:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.565 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.565 ************************************ 00:06:37.565 END TEST unittest_json 00:06:37.565 ************************************ 00:06:37.565 12:51:56 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:06:37.565 12:51:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.565 12:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.565 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.565 ************************************ 00:06:37.565 START TEST unittest_rpc 00:06:37.565 ************************************ 00:06:37.565 12:51:56 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:06:37.565 12:51:56 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:06:37.565 00:06:37.565 00:06:37.565 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.565 http://cunit.sourceforge.net/ 00:06:37.565 00:06:37.565 00:06:37.565 Suite: rpc 00:06:37.565 Test: test_jsonrpc_handler ...passed 00:06:37.565 Test: test_spdk_rpc_is_method_allowed ...passed 00:06:37.565 Test: test_rpc_get_methods ...[2024-06-11 12:51:56.324598] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:06:37.565 passed 00:06:37.565 Test: 
test_rpc_spdk_get_version ...passed 00:06:37.565 Test: test_spdk_rpc_listen_close ...passed 00:06:37.565 00:06:37.565 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.565 suites 1 1 n/a 0 0 00:06:37.565 tests 5 5 5 0 0 00:06:37.565 asserts 20 20 20 0 n/a 00:06:37.565 00:06:37.565 Elapsed time = 0.000 seconds 00:06:37.565 00:06:37.565 real 0m0.030s 00:06:37.565 user 0m0.025s 00:06:37.565 sys 0m0.004s 00:06:37.565 12:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.565 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.565 ************************************ 00:06:37.565 END TEST unittest_rpc 00:06:37.565 ************************************ 00:06:37.565 12:51:56 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:37.565 12:51:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.565 12:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.565 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.565 ************************************ 00:06:37.565 START TEST unittest_notify 00:06:37.565 ************************************ 00:06:37.565 12:51:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:37.824 00:06:37.824 00:06:37.824 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.824 http://cunit.sourceforge.net/ 00:06:37.824 00:06:37.824 00:06:37.824 Suite: app_suite 00:06:37.824 Test: notify ...passed 00:06:37.824 00:06:37.824 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.824 suites 1 1 n/a 0 0 00:06:37.824 tests 1 1 1 0 0 00:06:37.824 asserts 13 13 13 0 n/a 00:06:37.824 00:06:37.824 Elapsed time = 0.000 seconds 00:06:37.824 00:06:37.824 real 0m0.030s 00:06:37.824 user 0m0.024s 00:06:37.824 sys 0m0.006s 00:06:37.824 12:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.824 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.824 ************************************ 00:06:37.824 END TEST unittest_notify 00:06:37.824 ************************************ 00:06:37.824 12:51:56 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:06:37.824 12:51:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.824 12:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.824 12:51:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.824 ************************************ 00:06:37.824 START TEST unittest_nvme 00:06:37.824 ************************************ 00:06:37.824 12:51:56 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:06:37.824 12:51:56 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:06:37.824 00:06:37.824 00:06:37.824 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.824 http://cunit.sourceforge.net/ 00:06:37.824 00:06:37.824 00:06:37.824 Suite: nvme 00:06:37.824 Test: test_opc_data_transfer ...passed 00:06:37.824 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:06:37.824 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:06:37.824 Test: test_trid_parse_and_compare ...[2024-06-11 12:51:56.484873] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:06:37.824 [2024-06-11 12:51:56.485240] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:37.824 [2024-06-11 
12:51:56.485413] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:06:37.824 [2024-06-11 12:51:56.485575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:37.824 [2024-06-11 12:51:56.485714] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:06:37.824 [2024-06-11 12:51:56.485886] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:37.824 passed 00:06:37.824 Test: test_trid_trtype_str ...passed 00:06:37.824 Test: test_trid_adrfam_str ...passed 00:06:37.824 Test: test_nvme_ctrlr_probe ...[2024-06-11 12:51:56.486532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:37.824 passed 00:06:37.824 Test: test_spdk_nvme_probe ...[2024-06-11 12:51:56.486893] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:37.824 [2024-06-11 12:51:56.487024] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:37.824 [2024-06-11 12:51:56.487232] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:06:37.824 [2024-06-11 12:51:56.487379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:37.824 passed 00:06:37.824 Test: test_spdk_nvme_connect ...[2024-06-11 12:51:56.487772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:06:37.824 [2024-06-11 12:51:56.488186] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:37.824 [2024-06-11 12:51:56.488373] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:06:37.824 passed 00:06:37.825 Test: test_nvme_ctrlr_probe_internal ...[2024-06-11 12:51:56.488787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:37.825 [2024-06-11 12:51:56.488930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:06:37.825 passed 00:06:37.825 Test: test_nvme_init_controllers ...[2024-06-11 12:51:56.489108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:06:37.825 passed 00:06:37.825 Test: test_nvme_driver_init ...[2024-06-11 12:51:56.489467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:06:37.825 [2024-06-11 12:51:56.489617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:37.825 [2024-06-11 12:51:56.602893] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:06:37.825 [2024-06-11 12:51:56.603224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:06:37.825 passed 00:06:37.825 Test: test_spdk_nvme_detach ...passed 00:06:37.825 Test: test_nvme_completion_poll_cb ...passed 00:06:37.825 Test: test_nvme_user_copy_cmd_complete ...passed 00:06:37.825 Test: 
test_nvme_allocate_request_null ...passed 00:06:37.825 Test: test_nvme_allocate_request ...passed 00:06:37.825 Test: test_nvme_free_request ...passed 00:06:37.825 Test: test_nvme_allocate_request_user_copy ...passed 00:06:37.825 Test: test_nvme_robust_mutex_init_shared ...passed 00:06:37.825 Test: test_nvme_request_check_timeout ...passed 00:06:37.825 Test: test_nvme_wait_for_completion ...passed 00:06:37.825 Test: test_spdk_nvme_parse_func ...passed 00:06:37.825 Test: test_spdk_nvme_detach_async ...passed 00:06:37.825 Test: test_nvme_parse_addr ...[2024-06-11 12:51:56.606952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:06:37.825 passed 00:06:37.825 00:06:37.825 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.825 suites 1 1 n/a 0 0 00:06:37.825 tests 25 25 25 0 0 00:06:37.825 asserts 326 326 326 0 n/a 00:06:37.825 00:06:37.825 Elapsed time = 0.007 seconds 00:06:37.825 12:51:56 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:06:37.825 00:06:37.825 00:06:37.825 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.825 http://cunit.sourceforge.net/ 00:06:37.825 00:06:37.825 00:06:37.825 Suite: nvme_ctrlr 00:06:37.825 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-06-11 12:51:56.643839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:37.825 passed 00:06:37.825 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-06-11 12:51:56.645841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:37.825 passed 00:06:37.825 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-06-11 12:51:56.647398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:37.825 passed 00:06:37.825 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-06-11 12:51:56.648950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:37.825 passed 00:06:37.825 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-06-11 12:51:56.650545] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:37.825 [2024-06-11 12:51:56.651854] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-11 12:51:56.653236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-11 12:51:56.654584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:37.825 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-06-11 12:51:56.657363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.084 [2024-06-11 12:51:56.659835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-11 12:51:56.661180] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:38.084 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-06-11 12:51:56.663990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.084 [2024-06-11 12:51:56.665364] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-11 12:51:56.667988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:38.084 Test: test_nvme_ctrlr_init_delay ...[2024-06-11 12:51:56.670917] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.084 passed 00:06:38.084 Test: test_alloc_io_qpair_rr_1 ...[2024-06-11 12:51:56.672649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.084 [2024-06-11 12:51:56.672903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:38.084 [2024-06-11 12:51:56.673208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:38.084 [2024-06-11 12:51:56.673419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:38.084 [2024-06-11 12:51:56.673601] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:38.084 passed 00:06:38.084 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:06:38.084 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:06:38.084 Test: test_alloc_io_qpair_wrr_1 ...[2024-06-11 12:51:56.674396] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.084 passed 00:06:38.084 Test: test_alloc_io_qpair_wrr_2 ...[2024-06-11 12:51:56.674947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.084 [2024-06-11 12:51:56.675217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:38.084 passed 00:06:38.084 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-06-11 12:51:56.675853] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4832:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:06:38.084 [2024-06-11 12:51:56.676146] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:38.084 [2024-06-11 12:51:56.676380] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4909:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:06:38.084 [2024-06-11 12:51:56.676574] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:38.084 passed 00:06:38.084 Test: test_nvme_ctrlr_fail ...[2024-06-11 12:51:56.676941] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:06:38.084 passed 00:06:38.084 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:06:38.084 Test: test_nvme_ctrlr_set_supported_features ...passed 00:06:38.084 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:06:38.084 Test: test_nvme_ctrlr_test_active_ns ...[2024-06-11 12:51:56.677857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:06:38.343 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:06:38.343 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:06:38.343 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-06-11 12:51:56.998125] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-06-11 12:51:57.005805] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-06-11 12:51:57.007421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 [2024-06-11 12:51:57.007602] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2869:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:06:38.343 passed 00:06:38.343 Test: test_alloc_io_qpair_fail ...[2024-06-11 12:51:57.009104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 [2024-06-11 12:51:57.009324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_add_remove_process ...passed 00:06:38.343 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:06:38.343 Test: test_nvme_ctrlr_set_state ...[2024-06-11 12:51:57.010357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1464:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-06-11 12:51:57.010727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-06-11 12:51:57.033980] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_ns_mgmt ...[2024-06-11 12:51:57.077212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_reset ...[2024-06-11 12:51:57.079114] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_aer_callback ...[2024-06-11 12:51:57.079802] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-06-11 12:51:57.081613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:06:38.343 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:06:38.343 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-06-11 12:51:57.084091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:06:38.343 Test: test_nvme_ctrlr_ana_resize ...[2024-06-11 12:51:57.085929] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:06:38.343 Test: test_nvme_transport_ctrlr_ready ...[2024-06-11 12:51:57.087937] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4015:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:06:38.343 [2024-06-11 12:51:57.088082] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:06:38.343 passed 00:06:38.343 Test: test_nvme_ctrlr_disable ...[2024-06-11 12:51:57.088411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:38.343 passed 00:06:38.343 00:06:38.343 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.343 suites 1 1 n/a 0 0 00:06:38.343 tests 43 43 43 0 0 00:06:38.343 asserts 10418 10418 10418 0 n/a 00:06:38.343 00:06:38.343 Elapsed time = 0.391 seconds 00:06:38.343 12:51:57 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:06:38.343 00:06:38.343 
00:06:38.343 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.343 http://cunit.sourceforge.net/ 00:06:38.343 00:06:38.343 00:06:38.343 Suite: nvme_ctrlr_cmd 00:06:38.343 Test: test_get_log_pages ...passed 00:06:38.343 Test: test_set_feature_cmd ...passed 00:06:38.343 Test: test_set_feature_ns_cmd ...passed 00:06:38.343 Test: test_get_feature_cmd ...passed 00:06:38.343 Test: test_get_feature_ns_cmd ...passed 00:06:38.343 Test: test_abort_cmd ...passed 00:06:38.343 Test: test_set_host_id_cmds ...[2024-06-11 12:51:57.126042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 502:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:06:38.343 passed 00:06:38.343 Test: test_io_cmd_raw_no_payload_build ...passed 00:06:38.343 Test: test_io_raw_cmd ...passed 00:06:38.343 Test: test_io_raw_cmd_with_md ...passed 00:06:38.343 Test: test_namespace_attach ...passed 00:06:38.343 Test: test_namespace_detach ...passed 00:06:38.343 Test: test_namespace_create ...passed 00:06:38.343 Test: test_namespace_delete ...passed 00:06:38.343 Test: test_doorbell_buffer_config ...passed 00:06:38.343 Test: test_format_nvme ...passed 00:06:38.343 Test: test_fw_commit ...passed 00:06:38.343 Test: test_fw_image_download ...passed 00:06:38.343 Test: test_sanitize ...passed 00:06:38.343 Test: test_directive ...passed 00:06:38.343 Test: test_nvme_request_add_abort ...passed 00:06:38.343 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:06:38.343 Test: test_nvme_ctrlr_cmd_identify ...passed 00:06:38.343 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:06:38.343 00:06:38.343 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.343 suites 1 1 n/a 0 0 00:06:38.343 tests 24 24 24 0 0 00:06:38.343 asserts 198 198 198 0 n/a 00:06:38.343 00:06:38.343 Elapsed time = 0.001 seconds 00:06:38.343 12:51:57 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:06:38.343 00:06:38.343 00:06:38.343 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.343 http://cunit.sourceforge.net/ 00:06:38.343 00:06:38.343 00:06:38.343 Suite: nvme_ctrlr_cmd 00:06:38.343 Test: test_geometry_cmd ...passed 00:06:38.343 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:06:38.343 00:06:38.343 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.343 suites 1 1 n/a 0 0 00:06:38.343 tests 2 2 2 0 0 00:06:38.343 asserts 7 7 7 0 n/a 00:06:38.343 00:06:38.343 Elapsed time = 0.000 seconds 00:06:38.343 12:51:57 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:06:38.603 00:06:38.603 00:06:38.603 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.603 http://cunit.sourceforge.net/ 00:06:38.603 00:06:38.603 00:06:38.603 Suite: nvme 00:06:38.603 Test: test_nvme_ns_construct ...passed 00:06:38.603 Test: test_nvme_ns_uuid ...passed 00:06:38.603 Test: test_nvme_ns_csi ...passed 00:06:38.603 Test: test_nvme_ns_data ...passed 00:06:38.603 Test: test_nvme_ns_set_identify_data ...passed 00:06:38.603 Test: test_spdk_nvme_ns_get_values ...passed 00:06:38.603 Test: test_spdk_nvme_ns_is_active ...passed 00:06:38.603 Test: spdk_nvme_ns_supports ...passed 00:06:38.603 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:06:38.603 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:06:38.603 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:06:38.603 Test: test_nvme_ns_find_id_desc ...passed 00:06:38.603 00:06:38.603 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:38.603 suites 1 1 n/a 0 0 00:06:38.603 tests 12 12 12 0 0 00:06:38.603 asserts 83 83 83 0 n/a 00:06:38.603 00:06:38.603 Elapsed time = 0.001 seconds 00:06:38.603 12:51:57 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:06:38.603 00:06:38.603 00:06:38.603 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.603 http://cunit.sourceforge.net/ 00:06:38.603 00:06:38.603 00:06:38.603 Suite: nvme_ns_cmd 00:06:38.603 Test: split_test ...passed 00:06:38.603 Test: split_test2 ...passed 00:06:38.603 Test: split_test3 ...passed 00:06:38.603 Test: split_test4 ...passed 00:06:38.603 Test: test_nvme_ns_cmd_flush ...passed 00:06:38.603 Test: test_nvme_ns_cmd_dataset_management ...passed 00:06:38.603 Test: test_nvme_ns_cmd_copy ...passed 00:06:38.603 Test: test_io_flags ...[2024-06-11 12:51:57.221133] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:06:38.603 passed 00:06:38.603 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:06:38.603 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:06:38.603 Test: test_nvme_ns_cmd_reservation_register ...passed 00:06:38.603 Test: test_nvme_ns_cmd_reservation_release ...passed 00:06:38.603 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:06:38.603 Test: test_nvme_ns_cmd_reservation_report ...passed 00:06:38.603 Test: test_cmd_child_request ...passed 00:06:38.603 Test: test_nvme_ns_cmd_readv ...passed 00:06:38.603 Test: test_nvme_ns_cmd_read_with_md ...passed 00:06:38.603 Test: test_nvme_ns_cmd_writev ...[2024-06-11 12:51:57.224094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:06:38.603 passed 00:06:38.603 Test: test_nvme_ns_cmd_write_with_md ...passed 00:06:38.603 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:06:38.603 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:06:38.603 Test: test_nvme_ns_cmd_comparev ...passed 00:06:38.603 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:06:38.603 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:06:38.603 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:06:38.603 Test: test_nvme_ns_cmd_setup_request ...passed 00:06:38.603 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:06:38.603 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-06-11 12:51:57.227585] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:38.603 passed 00:06:38.603 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-06-11 12:51:57.227986] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:38.603 passed 00:06:38.603 Test: test_nvme_ns_cmd_verify ...passed 00:06:38.603 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:06:38.603 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:06:38.603 00:06:38.603 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.603 suites 1 1 n/a 0 0 00:06:38.603 tests 32 32 32 0 0 00:06:38.603 asserts 550 550 550 0 n/a 00:06:38.603 00:06:38.603 Elapsed time = 0.005 seconds 00:06:38.603 12:51:57 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:06:38.603 00:06:38.603 00:06:38.603 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.603 http://cunit.sourceforge.net/ 00:06:38.603 00:06:38.603 00:06:38.603 Suite: 
nvme_ns_cmd 00:06:38.603 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:06:38.604 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:06:38.604 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:06:38.604 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:06:38.604 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:06:38.604 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:06:38.604 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:06:38.604 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:06:38.604 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:06:38.604 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:06:38.604 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:06:38.604 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:06:38.604 00:06:38.604 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.604 suites 1 1 n/a 0 0 00:06:38.604 tests 12 12 12 0 0 00:06:38.604 asserts 123 123 123 0 n/a 00:06:38.604 00:06:38.604 Elapsed time = 0.001 seconds 00:06:38.604 12:51:57 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:06:38.604 00:06:38.604 00:06:38.604 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.604 http://cunit.sourceforge.net/ 00:06:38.604 00:06:38.604 00:06:38.604 Suite: nvme_qpair 00:06:38.604 Test: test3 ...passed 00:06:38.604 Test: test_ctrlr_failed ...passed 00:06:38.604 Test: struct_packing ...passed 00:06:38.604 Test: test_nvme_qpair_process_completions ...[2024-06-11 12:51:57.292566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:38.604 [2024-06-11 12:51:57.293168] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:38.604 [2024-06-11 12:51:57.293474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:06:38.604 [2024-06-11 12:51:57.293769] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:38.604 passed 00:06:38.604 Test: test_nvme_completion_is_retry ...passed 00:06:38.604 Test: test_get_status_string ...passed 00:06:38.604 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:06:38.604 Test: test_nvme_qpair_submit_request ...passed 00:06:38.604 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:06:38.604 Test: test_nvme_qpair_manual_complete_request ...passed 00:06:38.604 Test: test_nvme_qpair_init_deinit ...[2024-06-11 12:51:57.295315] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:38.604 passed 00:06:38.604 Test: test_nvme_get_sgl_print_info ...passed 00:06:38.604 00:06:38.604 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.604 suites 1 1 n/a 0 0 00:06:38.604 tests 12 12 12 0 0 00:06:38.604 asserts 154 154 154 0 n/a 00:06:38.604 00:06:38.604 Elapsed time = 0.002 seconds 00:06:38.604 12:51:57 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:06:38.604 00:06:38.604 00:06:38.604 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.604 http://cunit.sourceforge.net/ 00:06:38.604 
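Aside (not part of the log output): the nvme_qpair suite that completes just above exercises spdk_nvme_qpair_process_completions() and its error paths — the "aborting queued i/o" and "CQ transport error -6" messages. A minimal, hedged sketch of the polling loop an application would build on that API; io_done, read_complete, and poll_until_done are invented names for the example.

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative sketch only: read_complete would be passed as the cb_fn of an
 * I/O submission such as spdk_nvme_ns_cmd_read(); the qpair comes from
 * spdk_nvme_ctrlr_alloc_io_qpair(). */
static volatile bool io_done;

static void
read_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
        if (spdk_nvme_cpl_is_error(cpl)) {
                fprintf(stderr, "I/O failed: sct=%d sc=%d\n",
                        cpl->status.sct, cpl->status.sc);
        }
        io_done = true;
}

static void
poll_until_done(struct spdk_nvme_qpair *qpair)
{
        while (!io_done) {
                /* 0 = no completion limit; a negative return indicates a
                 * transport error, the case the "CQ transport error"
                 * messages above cover. */
                int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
                if (rc < 0) {
                        fprintf(stderr, "qpair in failed state: %d\n", rc);
                        break;
                }
        }
}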
00:06:38.604 00:06:38.604 Suite: nvme_pcie 00:06:38.604 Test: test_prp_list_append ...[2024-06-11 12:51:57.322709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:38.604 [2024-06-11 12:51:57.323163] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:06:38.604 [2024-06-11 12:51:57.323308] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:06:38.604 [2024-06-11 12:51:57.323657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:38.604 [2024-06-11 12:51:57.323862] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:38.604 passed 00:06:38.604 Test: test_nvme_pcie_hotplug_monitor ...passed 00:06:38.604 Test: test_shadow_doorbell_update ...passed 00:06:38.604 Test: test_build_contig_hw_sgl_request ...passed 00:06:38.604 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:06:38.604 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:06:38.604 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:06:38.604 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-06-11 12:51:57.325088] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:38.604 passed 00:06:38.604 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:06:38.604 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:06:38.604 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-06-11 12:51:57.325576] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
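Aside (not part of the log output): the test_prp_list_append failures above reflect the NVMe PRP rules — the first PRP entry only needs dword alignment, every later entry must be page aligned, and a request fails once it needs more PRP entries than the list holds ("out of PRP entries"). A standalone sketch of that arithmetic, not SPDK code, assuming 4 KiB pages.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Returns true if a buffer at virt_addr of len bytes can be described with
 * at most max_prp_entries PRP entries under the usual NVMe constraints. */
static bool
prp_transfer_ok(uintptr_t virt_addr, size_t len, size_t max_prp_entries)
{
        if (virt_addr & 0x3) {
                return false;   /* first PRP entry must be dword aligned */
        }
        /* The first entry absorbs the in-page offset; each following entry
         * must start on a page boundary, so count the remaining pages. */
        size_t first = PAGE_SIZE - (virt_addr & (PAGE_SIZE - 1));
        size_t entries = 1;
        if (len > first) {
                entries += (len - first + PAGE_SIZE - 1) / PAGE_SIZE;
        }
        return entries <= max_prp_entries;   /* else: "out of PRP entries" */
}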
00:06:38.604 passed 00:06:38.604 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-06-11 12:51:57.325949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:06:38.604 passed 00:06:38.604 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-06-11 12:51:57.326301] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:06:38.604 passed 00:06:38.604 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-06-11 12:51:57.326643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:06:38.604 passed 00:06:38.604 00:06:38.604 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.604 suites 1 1 n/a 0 0 00:06:38.604 tests 14 14 14 0 0 00:06:38.604 asserts 235 235 235 0 n/a 00:06:38.604 00:06:38.604 Elapsed time = 0.002 seconds 00:06:38.604 12:51:57 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:06:38.604 00:06:38.604 00:06:38.604 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.604 http://cunit.sourceforge.net/ 00:06:38.604 00:06:38.604 00:06:38.604 Suite: nvme_ns_cmd 00:06:38.604 Test: nvme_poll_group_create_test ...passed 00:06:38.604 Test: nvme_poll_group_add_remove_test ...passed 00:06:38.604 Test: nvme_poll_group_process_completions ...passed 00:06:38.604 Test: nvme_poll_group_destroy_test ...passed 00:06:38.604 Test: nvme_poll_group_get_free_stats ...passed 00:06:38.604 00:06:38.604 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.604 suites 1 1 n/a 0 0 00:06:38.604 tests 5 5 5 0 0 00:06:38.604 asserts 75 75 75 0 n/a 00:06:38.604 00:06:38.604 Elapsed time = 0.000 seconds 00:06:38.604 12:51:57 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:06:38.604 00:06:38.604 00:06:38.604 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.604 http://cunit.sourceforge.net/ 00:06:38.604 00:06:38.604 00:06:38.604 Suite: nvme_quirks 00:06:38.604 Test: test_nvme_quirks_striping ...passed 00:06:38.604 00:06:38.604 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.604 suites 1 1 n/a 0 0 00:06:38.604 tests 1 1 1 0 0 00:06:38.604 asserts 5 5 5 0 n/a 00:06:38.604 00:06:38.604 Elapsed time = 0.000 seconds 00:06:38.604 12:51:57 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:06:38.604 00:06:38.604 00:06:38.604 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.604 http://cunit.sourceforge.net/ 00:06:38.604 00:06:38.604 00:06:38.604 Suite: nvme_tcp 00:06:38.604 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:06:38.604 Test: test_nvme_tcp_build_iovs ...passed 00:06:38.604 Test: test_nvme_tcp_build_sgl_request ...[2024-06-11 12:51:57.411897] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffd844e74e0, and the iovcnt=16, remaining_size=28672 00:06:38.604 passed 00:06:38.604 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:06:38.604 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:06:38.604 Test: test_nvme_tcp_req_complete_safe ...passed 00:06:38.604 Test: test_nvme_tcp_req_get ...passed 00:06:38.604 Test: test_nvme_tcp_req_init ...passed 00:06:38.604 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:06:38.604 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:06:38.604 Test: 
test_nvme_tcp_qpair_set_recv_state ...[2024-06-11 12:51:57.414363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e9200 is same with the state(6) to be set 00:06:38.604 passed 00:06:38.604 Test: test_nvme_tcp_alloc_reqs ...passed 00:06:38.604 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-06-11 12:51:57.415064] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e8390 is same with the state(5) to be set 00:06:38.604 passed 00:06:38.604 Test: test_nvme_tcp_pdu_ch_handle ...[2024-06-11 12:51:57.415380] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffd844e8ec0 00:06:38.604 [2024-06-11 12:51:57.415523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:06:38.604 [2024-06-11 12:51:57.415722] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e8850 is same with the state(5) to be set 00:06:38.604 [2024-06-11 12:51:57.415883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:06:38.604 [2024-06-11 12:51:57.416060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e8850 is same with the state(5) to be set 00:06:38.604 [2024-06-11 12:51:57.416206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:06:38.604 [2024-06-11 12:51:57.416342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e8850 is same with the state(5) to be set 00:06:38.604 [2024-06-11 12:51:57.416488] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e8850 is same with the state(5) to be set 00:06:38.605 [2024-06-11 12:51:57.416639] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e8850 is same with the state(5) to be set 00:06:38.605 [2024-06-11 12:51:57.416799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e8850 is same with the state(5) to be set 00:06:38.605 [2024-06-11 12:51:57.416935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e8850 is same with the state(5) to be set 00:06:38.605 [2024-06-11 12:51:57.417089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e8850 is same with the state(5) to be set 00:06:38.605 passed 00:06:38.605 Test: test_nvme_tcp_qpair_connect_sock ...[2024-06-11 12:51:57.417582] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:06:38.605 [2024-06-11 12:51:57.417747] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:38.605 [2024-06-11 12:51:57.418068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:06:38.605 passed 00:06:38.605 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:06:38.605 Test: test_nvme_tcp_c2h_payload_handle ...[2024-06-11 12:51:57.418589] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd844e8a00): PDU Sequence Error 00:06:38.605 passed 00:06:38.605 Test: test_nvme_tcp_icresp_handle ...[2024-06-11 12:51:57.418995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:06:38.605 [2024-06-11 12:51:57.419150] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:06:38.605 [2024-06-11 12:51:57.419292] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e83a0 is same with the state(5) to be set 00:06:38.605 [2024-06-11 12:51:57.419474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:06:38.605 [2024-06-11 12:51:57.419628] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e83a0 is same with the state(5) to be set 00:06:38.605 [2024-06-11 12:51:57.419786] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e83a0 is same with the state(0) to be set 00:06:38.605 passed 00:06:38.605 Test: test_nvme_tcp_pdu_payload_handle ...[2024-06-11 12:51:57.420103] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd844e8ec0): PDU Sequence Error 00:06:38.605 passed 00:06:38.605 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-06-11 12:51:57.420507] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffd844e7680 00:06:38.605 passed 00:06:38.605 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:06:38.605 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-06-11 12:51:57.421117] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffd844e6d00, errno=0, rc=0 00:06:38.605 [2024-06-11 12:51:57.421270] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e6d00 is same with the state(5) to be set 00:06:38.605 [2024-06-11 12:51:57.421452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd844e6d00 is same with the state(5) to be set 00:06:38.605 [2024-06-11 12:51:57.421601] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd844e6d00 (0): Success 00:06:38.605 [2024-06-11 12:51:57.421762] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd844e6d00 (0): Success 00:06:38.605 passed 00:06:38.863 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-06-11 12:51:57.532600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
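Aside (not part of the log output): the "Failed to create qpair with size 0/1. Minimum queue size is 2." messages come from the TCP transport rejecting undersized queues at qpair creation. In application code the queue size is chosen through spdk_nvme_io_qpair_opts before allocation; a hedged sketch follows, where the ctrlr pointer is assumed to come from an earlier spdk_nvme_probe() or spdk_nvme_connect().

#include "spdk/nvme.h"

/* Sketch: size and allocate an I/O qpair; only raises the default queue
 * size, never lowers it below the transport minimum seen in the log. */
static struct spdk_nvme_qpair *
alloc_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        if (opts.io_queue_size < 128) {
                opts.io_queue_size = 128;
        }
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}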
00:06:38.863 [2024-06-11 12:51:57.532954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:38.863 passed 00:06:38.863 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:06:38.863 Test: test_nvme_tcp_poll_group_get_stats ...[2024-06-11 12:51:57.533688] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:38.863 [2024-06-11 12:51:57.533852] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:38.863 passed 00:06:38.863 Test: test_nvme_tcp_ctrlr_construct ...[2024-06-11 12:51:57.534319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:38.863 [2024-06-11 12:51:57.534482] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:38.863 [2024-06-11 12:51:57.534697] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:06:38.863 [2024-06-11 12:51:57.534865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:38.863 [2024-06-11 12:51:57.535077] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:06:38.863 [2024-06-11 12:51:57.535247] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:38.863 passed 00:06:38.863 Test: test_nvme_tcp_qpair_submit_request ...[2024-06-11 12:51:57.535672] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:06:38.863 [2024-06-11 12:51:57.535818] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:06:38.863 passed 00:06:38.863 00:06:38.863 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.863 suites 1 1 n/a 0 0 00:06:38.863 tests 27 27 27 0 0 00:06:38.863 asserts 624 624 624 0 n/a 00:06:38.863 00:06:38.863 Elapsed time = 0.117 seconds 00:06:38.863 12:51:57 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:06:38.863 00:06:38.864 00:06:38.864 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.864 http://cunit.sourceforge.net/ 00:06:38.864 00:06:38.864 00:06:38.864 Suite: nvme_transport 00:06:38.864 Test: test_nvme_get_transport ...passed 00:06:38.864 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:06:38.864 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:06:38.864 Test: test_nvme_transport_poll_group_add_remove ...passed 00:06:38.864 Test: test_ctrlr_get_memory_domains ...passed 00:06:38.864 00:06:38.864 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.864 suites 1 1 n/a 0 0 00:06:38.864 tests 5 5 5 0 0 00:06:38.864 asserts 28 28 28 0 n/a 00:06:38.864 00:06:38.864 Elapsed time = 0.000 seconds 00:06:38.864 12:51:57 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:06:38.864 00:06:38.864 00:06:38.864 CUnit - A unit testing framework for 
C - Version 2.1-3 00:06:38.864 http://cunit.sourceforge.net/ 00:06:38.864 00:06:38.864 00:06:38.864 Suite: nvme_io_msg 00:06:38.864 Test: test_nvme_io_msg_send ...passed 00:06:38.864 Test: test_nvme_io_msg_process ...passed 00:06:38.864 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:06:38.864 00:06:38.864 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.864 suites 1 1 n/a 0 0 00:06:38.864 tests 3 3 3 0 0 00:06:38.864 asserts 56 56 56 0 n/a 00:06:38.864 00:06:38.864 Elapsed time = 0.000 seconds 00:06:38.864 12:51:57 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:06:38.864 00:06:38.864 00:06:38.864 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.864 http://cunit.sourceforge.net/ 00:06:38.864 00:06:38.864 00:06:38.864 Suite: nvme_pcie_common 00:06:38.864 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-06-11 12:51:57.630100] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:06:38.864 passed 00:06:38.864 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:06:38.864 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:06:38.864 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-06-11 12:51:57.631381] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:06:38.864 [2024-06-11 12:51:57.631596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:06:38.864 [2024-06-11 12:51:57.631755] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:06:38.864 passed 00:06:38.864 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:06:38.864 Test: test_nvme_pcie_poll_group_get_stats ...[2024-06-11 12:51:57.632598] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:38.864 [2024-06-11 12:51:57.632739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:38.864 passed 00:06:38.864 00:06:38.864 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.864 suites 1 1 n/a 0 0 00:06:38.864 tests 6 6 6 0 0 00:06:38.864 asserts 148 148 148 0 n/a 00:06:38.864 00:06:38.864 Elapsed time = 0.002 seconds 00:06:38.864 12:51:57 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:06:38.864 00:06:38.864 00:06:38.864 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.864 http://cunit.sourceforge.net/ 00:06:38.864 00:06:38.864 00:06:38.864 Suite: nvme_fabric 00:06:38.864 Test: test_nvme_fabric_prop_set_cmd ...passed 00:06:38.864 Test: test_nvme_fabric_prop_get_cmd ...passed 00:06:38.864 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:06:38.864 Test: test_nvme_fabric_discover_probe ...passed 00:06:38.864 Test: test_nvme_fabric_qpair_connect ...[2024-06-11 12:51:57.656749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:06:38.864 passed 00:06:38.864 00:06:38.864 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.864 suites 1 
1 n/a 0 0 00:06:38.864 tests 5 5 5 0 0 00:06:38.864 asserts 60 60 60 0 n/a 00:06:38.864 00:06:38.864 Elapsed time = 0.001 seconds 00:06:38.864 12:51:57 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:06:38.864 00:06:38.864 00:06:38.864 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.864 http://cunit.sourceforge.net/ 00:06:38.864 00:06:38.864 00:06:38.864 Suite: nvme_opal 00:06:38.864 Test: test_opal_nvme_security_recv_send_done ...passed 00:06:38.864 Test: test_opal_add_short_atom_header ...[2024-06-11 12:51:57.687266] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:06:38.864 passed 00:06:38.864 00:06:38.864 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.864 suites 1 1 n/a 0 0 00:06:38.864 tests 2 2 2 0 0 00:06:38.864 asserts 22 22 22 0 n/a 00:06:38.864 00:06:38.864 Elapsed time = 0.001 seconds 00:06:39.122 ************************************ 00:06:39.122 END TEST unittest_nvme 00:06:39.122 ************************************ 00:06:39.122 00:06:39.122 real 0m1.234s 00:06:39.122 user 0m0.621s 00:06:39.122 sys 0m0.413s 00:06:39.122 12:51:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.122 12:51:57 -- common/autotest_common.sh@10 -- # set +x 00:06:39.122 12:51:57 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:39.122 12:51:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.122 12:51:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.122 12:51:57 -- common/autotest_common.sh@10 -- # set +x 00:06:39.122 ************************************ 00:06:39.122 START TEST unittest_log 00:06:39.122 ************************************ 00:06:39.122 12:51:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:39.122 00:06:39.122 00:06:39.122 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.122 http://cunit.sourceforge.net/ 00:06:39.122 00:06:39.122 00:06:39.122 Suite: log 00:06:39.122 Test: log_test ...[2024-06-11 12:51:57.767598] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:06:39.122 passed 00:06:39.122 Test: deprecation ...[2024-06-11 12:51:57.767870] log_ut.c: 55:log_test: *DEBUG*: log test 00:06:39.122 log dump test: 00:06:39.122 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:06:39.122 spdk dump test: 00:06:39.122 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:06:39.122 spdk dump test: 00:06:39.122 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:06:39.122 00000010 65 20 63 68 61 72 73 e chars 00:06:40.082 passed 00:06:40.082 00:06:40.082 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.082 suites 1 1 n/a 0 0 00:06:40.082 tests 2 2 2 0 0 00:06:40.082 asserts 73 73 73 0 n/a 00:06:40.082 00:06:40.082 Elapsed time = 0.001 seconds 00:06:40.082 00:06:40.082 real 0m1.031s 00:06:40.082 user 0m0.012s 00:06:40.082 sys 0m0.019s 00:06:40.082 ************************************ 00:06:40.082 END TEST unittest_log 00:06:40.082 ************************************ 00:06:40.082 12:51:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.082 12:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.082 12:51:58 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:40.082 12:51:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:06:40.082 12:51:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.082 12:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.082 ************************************ 00:06:40.082 START TEST unittest_lvol 00:06:40.082 ************************************ 00:06:40.082 12:51:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:40.082 00:06:40.082 00:06:40.082 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.082 http://cunit.sourceforge.net/ 00:06:40.082 00:06:40.082 00:06:40.082 Suite: lvol 00:06:40.082 Test: lvs_init_unload_success ...[2024-06-11 12:51:58.847290] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:06:40.082 passed 00:06:40.082 Test: lvs_init_destroy_success ...[2024-06-11 12:51:58.848383] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:06:40.082 passed 00:06:40.082 Test: lvs_init_opts_success ...passed 00:06:40.082 Test: lvs_unload_lvs_is_null_fail ...[2024-06-11 12:51:58.849558] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:06:40.082 passed 00:06:40.082 Test: lvs_names ...[2024-06-11 12:51:58.850250] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:06:40.082 [2024-06-11 12:51:58.850679] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:06:40.082 [2024-06-11 12:51:58.851150] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:06:40.082 passed 00:06:40.082 Test: lvol_create_destroy_success ...passed 00:06:40.082 Test: lvol_create_fail ...[2024-06-11 12:51:58.852843] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:06:40.082 [2024-06-11 12:51:58.853242] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:06:40.082 passed 00:06:40.082 Test: lvol_destroy_fail ...[2024-06-11 12:51:58.854391] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:06:40.082 passed 00:06:40.082 Test: lvol_close ...[2024-06-11 12:51:58.855299] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:06:40.082 [2024-06-11 12:51:58.855632] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:06:40.082 passed 00:06:40.082 Test: lvol_resize ...passed 00:06:40.082 Test: lvol_set_read_only ...passed 00:06:40.082 Test: test_lvs_load ...[2024-06-11 12:51:58.857972] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:06:40.082 [2024-06-11 12:51:58.858242] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:06:40.082 passed 00:06:40.082 Test: lvols_load ...[2024-06-11 12:51:58.859126] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:40.082 [2024-06-11 12:51:58.859465] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:40.082 passed 00:06:40.082 Test: lvol_open ...passed 00:06:40.082 Test: lvol_snapshot ...passed 00:06:40.082 Test: lvol_snapshot_fail ...[2024-06-11 
12:51:58.861749] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:06:40.082 passed 00:06:40.082 Test: lvol_clone ...passed 00:06:40.082 Test: lvol_clone_fail ...[2024-06-11 12:51:58.863795] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:06:40.082 passed 00:06:40.082 Test: lvol_iter_clones ...passed 00:06:40.082 Test: lvol_refcnt ...[2024-06-11 12:51:58.865336] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 3fed4cd4-407c-4091-8483-90a07c1a0d01 because it is still open 00:06:40.083 passed 00:06:40.083 Test: lvol_names ...[2024-06-11 12:51:58.866214] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:06:40.083 [2024-06-11 12:51:58.866546] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:40.083 [2024-06-11 12:51:58.867038] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:06:40.083 passed 00:06:40.083 Test: lvol_create_thin_provisioned ...passed 00:06:40.083 Test: lvol_rename ...[2024-06-11 12:51:58.868712] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:40.083 [2024-06-11 12:51:58.869047] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:06:40.083 passed 00:06:40.083 Test: lvs_rename ...[2024-06-11 12:51:58.869928] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:06:40.083 passed 00:06:40.083 Test: lvol_inflate ...[2024-06-11 12:51:58.870798] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:40.083 passed 00:06:40.083 Test: lvol_decouple_parent ...[2024-06-11 12:51:58.871708] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:40.083 passed 00:06:40.083 Test: lvol_get_xattr ...passed 00:06:40.083 Test: lvol_esnap_reload ...passed 00:06:40.083 Test: lvol_esnap_create_bad_args ...[2024-06-11 12:51:58.873793] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:06:40.083 [2024-06-11 12:51:58.874055] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:06:40.083 [2024-06-11 12:51:58.874326] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:06:40.083 [2024-06-11 12:51:58.874665] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:40.083 [2024-06-11 12:51:58.875008] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:06:40.083 passed 00:06:40.083 Test: lvol_esnap_create_delete ...passed 00:06:40.083 Test: lvol_esnap_load_esnaps ...[2024-06-11 12:51:58.876606] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:06:40.083 passed 00:06:40.083 Test: lvol_esnap_missing ...[2024-06-11 12:51:58.877448] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:40.083 [2024-06-11 12:51:58.877726] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:40.083 passed 00:06:40.083 Test: lvol_esnap_hotplug ... 00:06:40.083 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:06:40.083 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:06:40.083 [2024-06-11 12:51:58.879718] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 09d7883e-bbae-4efc-81c1-6f3c29cc5fa0: failed to create esnap bs_dev: error -12 00:06:40.083 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:06:40.083 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:06:40.083 [2024-06-11 12:51:58.880799] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 547f79f5-c385-4e9a-a8be-e012af835e85: failed to create esnap bs_dev: error -12 00:06:40.083 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:06:40.083 [2024-06-11 12:51:58.881394] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol a548ff4a-6c3a-4b5d-939b-f570e3f549b8: failed to create esnap bs_dev: error -12 00:06:40.083 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:06:40.083 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:06:40.083 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:06:40.083 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:06:40.083 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:06:40.083 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:06:40.083 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:06:40.083 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:06:40.083 passed 00:06:40.083 Test: lvol_get_by ...passed 00:06:40.083 00:06:40.083 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.083 suites 1 1 n/a 0 0 00:06:40.083 tests 34 34 34 0 0 00:06:40.083 asserts 1439 1439 1439 0 n/a 00:06:40.083 00:06:40.083 Elapsed time = 0.016 seconds 00:06:40.083 ************************************ 00:06:40.083 END TEST unittest_lvol 00:06:40.083 
************************************ 00:06:40.083 00:06:40.083 real 0m0.074s 00:06:40.083 user 0m0.023s 00:06:40.083 sys 0m0.027s 00:06:40.083 12:51:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.083 12:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.341 12:51:58 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:40.341 12:51:58 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:40.341 12:51:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.341 12:51:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.341 12:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.341 ************************************ 00:06:40.341 START TEST unittest_nvme_rdma 00:06:40.341 ************************************ 00:06:40.341 12:51:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:40.341 00:06:40.341 00:06:40.341 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.341 http://cunit.sourceforge.net/ 00:06:40.341 00:06:40.341 00:06:40.341 Suite: nvme_rdma 00:06:40.341 Test: test_nvme_rdma_build_sgl_request ...[2024-06-11 12:51:58.970459] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:06:40.341 [2024-06-11 12:51:58.970983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:40.341 [2024-06-11 12:51:58.971221] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:06:40.341 passed 00:06:40.341 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:06:40.341 Test: test_nvme_rdma_build_contig_request ...[2024-06-11 12:51:58.971584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:40.341 passed 00:06:40.341 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:06:40.341 Test: test_nvme_rdma_create_reqs ...[2024-06-11 12:51:58.972200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:06:40.341 passed 00:06:40.341 Test: test_nvme_rdma_create_rsps ...[2024-06-11 12:51:58.972913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:06:40.341 passed 00:06:40.341 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-06-11 12:51:58.973501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:40.341 [2024-06-11 12:51:58.973729] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:06:40.341 passed 00:06:40.341 Test: test_nvme_rdma_poller_create ...passed 00:06:40.341 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-06-11 12:51:58.974427] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:06:40.341 passed 00:06:40.341 Test: test_nvme_rdma_ctrlr_construct ...passed 00:06:40.341 Test: test_nvme_rdma_req_put_and_get ...passed 00:06:40.341 Test: test_nvme_rdma_req_init ...passed 00:06:40.341 Test: test_nvme_rdma_validate_cm_event ...[2024-06-11 12:51:58.975500] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:06:40.341 [2024-06-11 12:51:58.975649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:06:40.341 passed 00:06:40.341 Test: test_nvme_rdma_qpair_init ...passed 00:06:40.341 Test: test_nvme_rdma_qpair_submit_request ...passed 00:06:40.341 Test: test_nvme_rdma_memory_domain ...[2024-06-11 12:51:58.976425] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:06:40.341 passed 00:06:40.341 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:06:40.341 Test: test_rdma_get_memory_translation ...[2024-06-11 12:51:58.976883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:06:40.341 [2024-06-11 12:51:58.977036] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:06:40.341 passed 00:06:40.341 Test: test_get_rdma_qpair_from_wc ...passed 00:06:40.341 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:06:40.341 Test: test_nvme_rdma_poll_group_get_stats ...[2024-06-11 12:51:58.977588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:40.341 [2024-06-11 12:51:58.977752] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:40.341 passed 00:06:40.341 Test: test_nvme_rdma_qpair_set_poller ...[2024-06-11 12:51:58.978166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:06:40.341 [2024-06-11 12:51:58.978309] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:06:40.341 [2024-06-11 12:51:58.978432] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffcaa47e3e0 on poll group 0x60b0000001a0 00:06:40.341 [2024-06-11 12:51:58.978590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:06:40.341 [2024-06-11 12:51:58.978732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:06:40.341 [2024-06-11 12:51:58.978872] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffcaa47e3e0 on poll group 0x60b0000001a0 00:06:40.341 [2024-06-11 12:51:58.979052] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:40.341 passed 00:06:40.341 00:06:40.341 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.341 suites 1 1 n/a 0 0 00:06:40.341 tests 22 22 22 0 0 00:06:40.341 asserts 412 412 412 0 n/a 00:06:40.341 00:06:40.341 Elapsed time = 0.004 seconds 00:06:40.341 00:06:40.341 real 0m0.042s 00:06:40.341 user 0m0.026s 00:06:40.341 sys 0m0.010s 00:06:40.341 12:51:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.341 12:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.341 ************************************ 00:06:40.341 END TEST unittest_nvme_rdma 00:06:40.341 ************************************ 00:06:40.341 12:51:59 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:40.341 12:51:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.341 12:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.341 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.341 ************************************ 00:06:40.341 START TEST unittest_nvmf_transport 00:06:40.341 ************************************ 00:06:40.341 12:51:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:40.341 00:06:40.341 00:06:40.341 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.341 http://cunit.sourceforge.net/ 00:06:40.341 00:06:40.341 00:06:40.341 Suite: nvmf 00:06:40.341 Test: test_spdk_nvmf_transport_create ...[2024-06-11 12:51:59.061023] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:06:40.341 [2024-06-11 12:51:59.061361] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:06:40.341 [2024-06-11 12:51:59.061539] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:06:40.341 [2024-06-11 12:51:59.061760] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:06:40.341 passed 00:06:40.341 Test: test_nvmf_transport_poll_group_create ...passed 00:06:40.341 Test: test_spdk_nvmf_transport_opts_init ...[2024-06-11 12:51:59.062404] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
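Aside (not part of the log output): the nvmf transport suite above drives spdk_nvmf_transport_opts_init() through its argument checks — unknown transport name, NULL opts, zero opts_size. A sketch of the corresponding happy-path call; the three-argument signature matches the tree under test here, but treat it as an assumption for other SPDK versions and verify the declaring header.

#include <stdbool.h>
#include "spdk/nvmf.h"

/* Sketch only: fails (returns false) for an unknown transport name, a NULL
 * opts pointer, or opts_size == 0 -- exactly the cases the unit test drives. */
static bool
init_tcp_transport_opts(struct spdk_nvmf_transport_opts *opts)
{
        return spdk_nvmf_transport_opts_init("TCP", opts, sizeof(*opts));
}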
00:06:40.341 [2024-06-11 12:51:59.062554] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:06:40.341 [2024-06-11 12:51:59.062687] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:06:40.341 passed 00:06:40.341 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:06:40.341 00:06:40.341 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.341 suites 1 1 n/a 0 0 00:06:40.341 tests 4 4 4 0 0 00:06:40.341 asserts 49 49 49 0 n/a 00:06:40.341 00:06:40.341 Elapsed time = 0.001 seconds 00:06:40.341 00:06:40.341 real 0m0.033s 00:06:40.341 user 0m0.012s 00:06:40.341 sys 0m0.020s 00:06:40.341 12:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.341 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.341 ************************************ 00:06:40.341 END TEST unittest_nvmf_transport 00:06:40.341 ************************************ 00:06:40.341 12:51:59 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:40.341 12:51:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.341 12:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.341 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.341 ************************************ 00:06:40.341 START TEST unittest_rdma 00:06:40.341 ************************************ 00:06:40.341 12:51:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:40.341 00:06:40.341 00:06:40.341 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.341 http://cunit.sourceforge.net/ 00:06:40.341 00:06:40.341 00:06:40.341 Suite: rdma_common 00:06:40.342 Test: test_spdk_rdma_pd ...[2024-06-11 12:51:59.146137] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:40.342 [2024-06-11 12:51:59.146570] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:40.342 passed 00:06:40.342 00:06:40.342 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.342 suites 1 1 n/a 0 0 00:06:40.342 tests 1 1 1 0 0 00:06:40.342 asserts 31 31 31 0 n/a 00:06:40.342 00:06:40.342 Elapsed time = 0.001 seconds 00:06:40.342 00:06:40.342 real 0m0.033s 00:06:40.342 user 0m0.023s 00:06:40.342 sys 0m0.009s 00:06:40.342 12:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.342 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.342 ************************************ 00:06:40.342 END TEST unittest_rdma 00:06:40.342 ************************************ 00:06:40.599 12:51:59 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:40.599 12:51:59 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:40.599 12:51:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.599 12:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.599 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.599 ************************************ 00:06:40.599 START TEST unittest_nvme_cuse 00:06:40.599 ************************************ 00:06:40.599 12:51:59 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:40.599 00:06:40.599 00:06:40.599 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.599 http://cunit.sourceforge.net/ 00:06:40.599 00:06:40.599 00:06:40.599 Suite: nvme_cuse 00:06:40.599 Test: test_cuse_nvme_submit_io_read_write ...passed 00:06:40.599 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:06:40.599 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:06:40.599 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:06:40.599 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:06:40.599 Test: test_cuse_nvme_submit_io ...[2024-06-11 12:51:59.234757] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:06:40.599 passed 00:06:40.599 Test: test_cuse_nvme_reset ...[2024-06-11 12:51:59.235296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:06:40.599 passed 00:06:40.599 Test: test_nvme_cuse_stop ...passed 00:06:40.599 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:06:40.599 00:06:40.599 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.599 suites 1 1 n/a 0 0 00:06:40.599 tests 9 9 9 0 0 00:06:40.599 asserts 121 121 121 0 n/a 00:06:40.599 00:06:40.599 Elapsed time = 0.001 seconds 00:06:40.599 00:06:40.599 real 0m0.034s 00:06:40.599 user 0m0.017s 00:06:40.599 sys 0m0.015s 00:06:40.599 12:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.599 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.599 ************************************ 00:06:40.599 END TEST unittest_nvme_cuse 00:06:40.599 ************************************ 00:06:40.599 12:51:59 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:06:40.599 12:51:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.599 12:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.599 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.599 ************************************ 00:06:40.599 START TEST unittest_nvmf 00:06:40.599 ************************************ 00:06:40.599 12:51:59 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:06:40.599 12:51:59 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:06:40.599 00:06:40.599 00:06:40.599 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.599 http://cunit.sourceforge.net/ 00:06:40.599 00:06:40.599 00:06:40.599 Suite: nvmf 00:06:40.599 Test: test_get_log_page ...passed 00:06:40.599 Test: test_process_fabrics_cmd ...passed 00:06:40.599 Test: test_connect ...[2024-06-11 12:51:59.315208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:06:40.599 [2024-06-11 12:51:59.315842] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:06:40.599 [2024-06-11 12:51:59.315922] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:06:40.599 [2024-06-11 12:51:59.315970] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:06:40.599 [2024-06-11 12:51:59.316007] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:06:40.599 [2024-06-11 12:51:59.316074] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:06:40.599 [2024-06-11 12:51:59.316096] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:06:40.599 [2024-06-11 12:51:59.316168] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:06:40.599 [2024-06-11 12:51:59.316199] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:06:40.599 [2024-06-11 12:51:59.316295] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:06:40.599 [2024-06-11 12:51:59.316348] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:06:40.599 [2024-06-11 12:51:59.316566] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:06:40.599 [2024-06-11 12:51:59.316627] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:06:40.599 [2024-06-11 12:51:59.316712] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:06:40.599 [2024-06-11 12:51:59.316760] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:06:40.599 [2024-06-11 12:51:59.316828] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:06:40.599 passed 00:06:40.599 Test: test_get_ns_id_desc_list ...[2024-06-11 12:51:59.316932] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:06:40.599 passed 00:06:40.599 Test: test_identify_ns ...[2024-06-11 12:51:59.317098] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.599 [2024-06-11 12:51:59.317247] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:06:40.599 [2024-06-11 12:51:59.317348] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:06:40.599 passed 00:06:40.599 Test: test_identify_ns_iocs_specific ...[2024-06-11 12:51:59.317473] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.599 [2024-06-11 12:51:59.317694] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:40.599 passed 00:06:40.599 Test: test_reservation_write_exclusive ...passed 00:06:40.599 Test: test_reservation_exclusive_access ...passed 00:06:40.599 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:06:40.599 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:06:40.599 Test: test_reservation_notification_log_page ...passed 00:06:40.599 Test: test_get_dif_ctx ...passed 00:06:40.599 Test: test_set_get_features ...[2024-06-11 12:51:59.318138] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:40.599 [2024-06-11 12:51:59.318172] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:40.599 [2024-06-11 12:51:59.318201] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:06:40.599 passed 00:06:40.599 Test: test_identify_ctrlr ...passed 00:06:40.600 Test: test_identify_ctrlr_iocs_specific ...[2024-06-11 12:51:59.318237] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:06:40.600 passed 00:06:40.600 Test: test_custom_admin_cmd ...passed 00:06:40.600 Test: test_fused_compare_and_write ...passed 00:06:40.600 Test: test_multi_async_event_reqs ...[2024-06-11 12:51:59.318563] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:06:40.600 [2024-06-11 12:51:59.318598] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:40.600 [2024-06-11 12:51:59.318627] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:40.600 passed 00:06:40.600 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:06:40.600 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:06:40.600 Test: test_multi_async_events ...passed 00:06:40.600 Test: test_rae ...passed 00:06:40.600 Test: test_nvmf_ctrlr_create_destruct ...passed 00:06:40.600 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:06:40.600 Test: test_spdk_nvmf_request_zcopy_start ...[2024-06-11 12:51:59.318977] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:06:40.600 passed 00:06:40.600 Test: test_zcopy_read ...passed 00:06:40.600 Test: test_zcopy_write ...passed 00:06:40.600 Test: test_nvmf_property_set ...passed 00:06:40.600 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-06-11 12:51:59.319092] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:40.600 passed 00:06:40.600 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:06:40.600 00:06:40.600 [2024-06-11 12:51:59.319141] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:40.600 [2024-06-11 12:51:59.319175] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:06:40.600 [2024-06-11 12:51:59.319200] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:06:40.600 [2024-06-11 12:51:59.319218] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:06:40.600 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.600 suites 1 1 n/a 0 0 00:06:40.600 tests 30 30 30 0 0 00:06:40.600 asserts 885 885 885 0 n/a 00:06:40.600 00:06:40.600 Elapsed time = 0.004 seconds 00:06:40.600 12:51:59 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:06:40.600 00:06:40.600 00:06:40.600 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.600 http://cunit.sourceforge.net/ 00:06:40.600 00:06:40.600 00:06:40.600 Suite: nvmf 00:06:40.600 Test: test_get_rw_params ...passed 00:06:40.600 Test: test_lba_in_range ...passed 00:06:40.600 Test: test_get_dif_ctx ...passed 00:06:40.600 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:06:40.600 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-06-11 12:51:59.352793] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:06:40.600 [2024-06-11 12:51:59.353222] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:06:40.600 [2024-06-11 12:51:59.353368] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:06:40.600 passed 00:06:40.600 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-06-11 12:51:59.353484] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:06:40.600 [2024-06-11 12:51:59.353598] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:06:40.600 passed 00:06:40.600 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-06-11 12:51:59.353774] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:06:40.600 [2024-06-11 12:51:59.353819] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:06:40.600 [2024-06-11 12:51:59.353910] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:06:40.600 passed 00:06:40.600 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:06:40.600 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed[2024-06-11 12:51:59.353962] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:06:40.600 00:06:40.600 00:06:40.600 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.600 suites 1 1 n/a 0 0 00:06:40.600 tests 9 9 9 0 0 00:06:40.600 asserts 157 157 157 0 n/a 00:06:40.600 00:06:40.600 Elapsed time = 0.001 seconds 00:06:40.600 12:51:59 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:06:40.600 00:06:40.600 00:06:40.600 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.600 http://cunit.sourceforge.net/ 00:06:40.600 00:06:40.600 00:06:40.600 Suite: nvmf 00:06:40.600 Test: test_discovery_log ...passed 00:06:40.600 Test: test_discovery_log_with_filters ...passed 00:06:40.600 00:06:40.600 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.600 suites 1 1 n/a 0 0 00:06:40.600 tests 2 2 2 0 0 00:06:40.600 asserts 238 238 238 0 n/a 00:06:40.600 00:06:40.600 Elapsed time = 0.002 seconds 00:06:40.600 12:51:59 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:06:40.600 00:06:40.600 00:06:40.600 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.600 http://cunit.sourceforge.net/ 00:06:40.600 00:06:40.600 00:06:40.600 Suite: nvmf 
00:06:40.600 Test: nvmf_test_create_subsystem ...[2024-06-11 12:51:59.429025] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:06:40.600 [2024-06-11 12:51:59.429323] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:06:40.600 [2024-06-11 12:51:59.429396] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:06:40.600 [2024-06-11 12:51:59.429446] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:06:40.600 [2024-06-11 12:51:59.429472] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:06:40.600 [2024-06-11 12:51:59.429503] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:06:40.600 [2024-06-11 12:51:59.429601] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:06:40.600 [2024-06-11 12:51:59.429764] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:06:40.600 passed 00:06:40.600 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-06-11 12:51:59.429862] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:06:40.600 [2024-06-11 12:51:59.429893] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:40.600 [2024-06-11 12:51:59.429913] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:40.600 [2024-06-11 12:51:59.430078] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:06:40.600 passed 00:06:40.600 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:06:40.600 Test: test_reservation_register ...[2024-06-11 12:51:59.430157] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:06:40.600 [2024-06-11 12:51:59.430424] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:40.600 [2024-06-11 12:51:59.430543] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:06:40.600 passed 00:06:40.600 Test: test_reservation_register_with_ptpl ...passed 00:06:40.600 Test: test_reservation_acquire_preempt_1 ...[2024-06-11 12:51:59.431578] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:40.600 passed 00:06:40.600 Test: test_reservation_acquire_release_with_ptpl ...passed 00:06:40.600 Test: test_reservation_release ...[2024-06-11 12:51:59.433235] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:40.600 passed 00:06:40.600 Test: test_reservation_unregister_notification ...[2024-06-11 12:51:59.433498] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:40.600 passed 00:06:40.600 Test: test_reservation_release_notification ...[2024-06-11 12:51:59.433788] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:40.600 passed 00:06:40.600 Test: test_reservation_release_notification_write_exclusive ...[2024-06-11 12:51:59.434024] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:40.600 passed 00:06:40.600 Test: test_reservation_clear_notification ...[2024-06-11 12:51:59.434235] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:40.600 passed 00:06:40.600 Test: test_reservation_preempt_notification ...[2024-06-11 12:51:59.434465] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:40.600 passed 00:06:40.600 Test: test_spdk_nvmf_ns_event ...passed 00:06:40.600 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:06:40.600 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:06:40.600 Test: test_spdk_nvmf_subsystem_add_host ...[2024-06-11 12:51:59.435237] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:06:40.600 [2024-06-11 12:51:59.435335] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:06:40.600 passed 00:06:40.600 Test: test_nvmf_ns_reservation_report ...[2024-06-11 12:51:59.435465] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:06:40.600 passed 00:06:40.600 Test: test_nvmf_nqn_is_valid ...[2024-06-11 12:51:59.435544] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:06:40.858 passed 00:06:40.858 Test: test_nvmf_ns_reservation_restore ...[2024-06-11 12:51:59.435578] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:a3d8b6b4-7c75-42cb-b560-61934e0cef6": uuid is not the correct length 00:06:40.858 [2024-06-11 12:51:59.435605] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:06:40.858 [2024-06-11 12:51:59.435718] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:06:40.858 passed 00:06:40.858 Test: test_nvmf_subsystem_state_change ...passed 00:06:40.858 Test: test_nvmf_reservation_custom_ops ...passed 00:06:40.858 00:06:40.858 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.858 suites 1 1 n/a 0 0 00:06:40.858 tests 22 22 22 0 0 00:06:40.858 asserts 407 407 407 0 n/a 00:06:40.858 00:06:40.858 Elapsed time = 0.008 seconds 00:06:40.858 12:51:59 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:06:40.858 00:06:40.858 00:06:40.858 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.858 http://cunit.sourceforge.net/ 00:06:40.858 00:06:40.858 00:06:40.858 Suite: nvmf 00:06:40.858 Test: test_nvmf_tcp_create ...[2024-06-11 12:51:59.482672] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:06:40.858 passed 00:06:40.858 Test: test_nvmf_tcp_destroy ...passed 00:06:40.858 Test: test_nvmf_tcp_poll_group_create ...passed 00:06:40.858 Test: test_nvmf_tcp_send_c2h_data ...passed 00:06:40.858 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:06:40.858 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:06:40.858 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:06:40.858 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-06-11 12:51:59.552791] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.858 passed 00:06:40.858 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed[2024-06-11 12:51:59.552859] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fad10 is same with the state(5) to be set 00:06:40.858 [2024-06-11 12:51:59.552920] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fad10 is same with the state(5) to be set 00:06:40.858 [2024-06-11 12:51:59.552946] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.858 [2024-06-11 12:51:59.552963] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fad10 is same with the state(5) to be set 00:06:40.858 00:06:40.858 Test: test_nvmf_tcp_icreq_handle ...[2024-06-11 12:51:59.553066] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:40.858 [2024-06-11 12:51:59.553147] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.858 [2024-06-11 12:51:59.553193] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fad10 is same with the state(5) to be set 00:06:40.858 [2024-06-11 12:51:59.553212] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:40.858 [2024-06-11 12:51:59.553234] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fad10 is same with the state(5) to be set 00:06:40.858 passed 00:06:40.858 Test: test_nvmf_tcp_check_xfer_type ...passed 00:06:40.858 Test: test_nvmf_tcp_invalid_sgl ...[2024-06-11 12:51:59.553252] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.858 [2024-06-11 12:51:59.553274] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fad10 is same with the state(5) to be set 00:06:40.858 [2024-06-11 12:51:59.553295] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:06:40.858 [2024-06-11 12:51:59.553329] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fad10 is same with the state(5) to be set 00:06:40.858 [2024-06-11 12:51:59.553372] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:06:40.858 passed 00:06:40.858 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-06-11 12:51:59.553400] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.858 [2024-06-11 12:51:59.553419] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fad10 is same with the state(5) to be set 00:06:40.858 [2024-06-11 12:51:59.553481] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffc124fba70 00:06:40.858 [2024-06-11 12:51:59.553554] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.858 [2024-06-11 12:51:59.553589] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fb1d0 is same with the state(5) to be set 00:06:40.858 [2024-06-11 12:51:59.553617] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffc124fb1d0 00:06:40.858 [2024-06-11 12:51:59.553637] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.858 [2024-06-11 12:51:59.553664] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fb1d0 is same with the state(5) to be set 00:06:40.858 [2024-06-11 12:51:59.553717] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:06:40.858 [2024-06-11 12:51:59.553744] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.858 [2024-06-11 12:51:59.553776] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fb1d0 is same with the state(5) to be set 00:06:40.858 [2024-06-11 12:51:59.553803] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:06:40.858 [2024-06-11 12:51:59.553823] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.859 [2024-06-11 12:51:59.553846] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fb1d0 is same with the state(5) to be set 00:06:40.859 [2024-06-11 12:51:59.553882] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.859 [2024-06-11 12:51:59.553906] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fb1d0 is same with the state(5) to be set 00:06:40.859 [2024-06-11 12:51:59.553946] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.859 [2024-06-11 12:51:59.553964] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fb1d0 is same with the state(5) to be set 00:06:40.859 [2024-06-11 12:51:59.553995] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.859 [2024-06-11 12:51:59.554027] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fb1d0 is same with the state(5) to be set 00:06:40.859 [2024-06-11 12:51:59.554053] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.859 [2024-06-11 12:51:59.554070] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fb1d0 is same with the state(5) to be set 00:06:40.859 passed 00:06:40.859 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-06-11 12:51:59.554105] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.859 [2024-06-11 12:51:59.554137] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fb1d0 is same with the state(5) to be set 00:06:40.859 [2024-06-11 
12:51:59.554164] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:40.859 [2024-06-11 12:51:59.554181] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc124fb1d0 is same with the state(5) to be set 00:06:40.859 passed 00:06:40.859 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-06-11 12:51:59.568184] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:06:40.859 [2024-06-11 12:51:59.568242] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:06:40.859 passed 00:06:40.859 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-06-11 12:51:59.568409] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:06:40.859 [2024-06-11 12:51:59.568462] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:06:40.859 passed 00:06:40.859 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:06:40.859 00:06:40.859 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.859 suites 1 1 n/a 0 0 00:06:40.859 tests 17 17 17 0 0 00:06:40.859 asserts 222 222 222 0 n/a 00:06:40.859 00:06:40.859 Elapsed time = 0.102 seconds 00:06:40.859 [2024-06-11 12:51:59.568563] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:06:40.859 [2024-06-11 12:51:59.568583] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
00:06:40.859 12:51:59 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:06:40.859 00:06:40.859 00:06:40.859 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.859 http://cunit.sourceforge.net/ 00:06:40.859 00:06:40.859 00:06:40.859 Suite: nvmf 00:06:40.859 Test: test_nvmf_tgt_create_poll_group ...passed 00:06:40.859 00:06:40.859 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.859 suites 1 1 n/a 0 0 00:06:40.859 tests 1 1 1 0 0 00:06:40.859 asserts 17 17 17 0 n/a 00:06:40.859 00:06:40.859 Elapsed time = 0.022 seconds 00:06:41.117 00:06:41.117 real 0m0.407s 00:06:41.117 user 0m0.206s 00:06:41.117 sys 0m0.204s 00:06:41.117 12:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.117 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:41.117 ************************************ 00:06:41.117 END TEST unittest_nvmf 00:06:41.117 ************************************ 00:06:41.117 12:51:59 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:41.117 12:51:59 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:41.117 12:51:59 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:41.117 12:51:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.117 12:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.117 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:41.117 ************************************ 00:06:41.117 START TEST unittest_nvmf_rdma 00:06:41.117 ************************************ 00:06:41.117 12:51:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:41.117 00:06:41.117 00:06:41.117 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.117 http://cunit.sourceforge.net/ 00:06:41.117 00:06:41.117 00:06:41.117 Suite: nvmf 00:06:41.117 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-06-11 12:51:59.778812] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:06:41.117 [2024-06-11 12:51:59.779103] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:06:41.117 [2024-06-11 12:51:59.779138] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:06:41.117 passed 00:06:41.117 Test: test_spdk_nvmf_rdma_request_process ...passed 00:06:41.117 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:06:41.117 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:06:41.117 Test: test_nvmf_rdma_opts_init ...passed 00:06:41.117 Test: test_nvmf_rdma_request_free_data ...passed 00:06:41.117 Test: test_nvmf_rdma_update_ibv_state ...passed 00:06:41.117 Test: test_nvmf_rdma_resources_create ...[2024-06-11 12:51:59.780146] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
00:06:41.117 [2024-06-11 12:51:59.780187] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:06:41.117 passed 00:06:41.117 Test: test_nvmf_rdma_qpair_compare ...passed 00:06:41.117 Test: test_nvmf_rdma_resize_cq ...[2024-06-11 12:51:59.781197] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:06:41.117 Using CQ of insufficient size may lead to CQ overrun 00:06:41.117 passed 00:06:41.117 00:06:41.117 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.117 suites 1 1 n/a 0 0 00:06:41.117 tests 10 10 10 0 0 00:06:41.117 asserts 584 584 584 0 n/a 00:06:41.117 00:06:41.117 Elapsed time = 0.003 seconds 00:06:41.117 [2024-06-11 12:51:59.781276] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:06:41.117 [2024-06-11 12:51:59.781329] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:41.117 00:06:41.117 real 0m0.041s 00:06:41.117 user 0m0.020s 00:06:41.117 sys 0m0.021s 00:06:41.117 12:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.117 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:41.117 ************************************ 00:06:41.117 END TEST unittest_nvmf_rdma 00:06:41.117 ************************************ 00:06:41.117 12:51:59 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:41.117 12:51:59 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:06:41.117 12:51:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.117 12:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.117 12:51:59 -- common/autotest_common.sh@10 -- # set +x 00:06:41.117 ************************************ 00:06:41.117 START TEST unittest_scsi 00:06:41.117 ************************************ 00:06:41.117 12:51:59 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:06:41.117 12:51:59 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:06:41.117 00:06:41.117 00:06:41.117 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.117 http://cunit.sourceforge.net/ 00:06:41.117 00:06:41.117 00:06:41.117 Suite: dev_suite 00:06:41.117 Test: dev_destruct_null_dev ...passed 00:06:41.117 Test: dev_destruct_zero_luns ...passed 00:06:41.117 Test: dev_destruct_null_lun ...passed 00:06:41.118 Test: dev_destruct_success ...passed 00:06:41.118 Test: dev_construct_num_luns_zero ...[2024-06-11 12:51:59.866967] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:06:41.118 passed 00:06:41.118 Test: dev_construct_no_lun_zero ...passed 00:06:41.118 Test: dev_construct_null_lun ...[2024-06-11 12:51:59.867287] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:06:41.118 [2024-06-11 12:51:59.867336] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:06:41.118 passed 00:06:41.118 Test: dev_construct_name_too_long ...passed 00:06:41.118 Test: dev_construct_success ...passed 00:06:41.118 Test: dev_construct_success_lun_zero_not_first ...passed 00:06:41.118 Test: 
dev_queue_mgmt_task_success ...[2024-06-11 12:51:59.867373] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:06:41.118 passed 00:06:41.118 Test: dev_queue_task_success ...passed 00:06:41.118 Test: dev_stop_success ...passed 00:06:41.118 Test: dev_add_port_max_ports ...passed 00:06:41.118 Test: dev_add_port_construct_failure1 ...[2024-06-11 12:51:59.867651] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:06:41.118 [2024-06-11 12:51:59.867738] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:06:41.118 passed 00:06:41.118 Test: dev_add_port_construct_failure2 ...[2024-06-11 12:51:59.867813] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:06:41.118 passed 00:06:41.118 Test: dev_add_port_success1 ...passed 00:06:41.118 Test: dev_add_port_success2 ...passed 00:06:41.118 Test: dev_add_port_success3 ...passed 00:06:41.118 Test: dev_find_port_by_id_num_ports_zero ...passed 00:06:41.118 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:06:41.118 Test: dev_find_port_by_id_success ...passed 00:06:41.118 Test: dev_add_lun_bdev_not_found ...passed 00:06:41.118 Test: dev_add_lun_no_free_lun_id ...[2024-06-11 12:51:59.868146] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:06:41.118 passed 00:06:41.118 Test: dev_add_lun_success1 ...passed 00:06:41.118 Test: dev_add_lun_success2 ...passed 00:06:41.118 Test: dev_check_pending_tasks ...passed 00:06:41.118 Test: dev_iterate_luns ...passed 00:06:41.118 Test: dev_find_free_lun ...passed 00:06:41.118 00:06:41.118 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.118 suites 1 1 n/a 0 0 00:06:41.118 tests 29 29 29 0 0 00:06:41.118 asserts 97 97 97 0 n/a 00:06:41.118 00:06:41.118 Elapsed time = 0.002 seconds 00:06:41.118 12:51:59 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:06:41.118 00:06:41.118 00:06:41.118 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.118 http://cunit.sourceforge.net/ 00:06:41.118 00:06:41.118 00:06:41.118 Suite: lun_suite 00:06:41.118 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:06:41.118 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-06-11 12:51:59.903961] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:06:41.118 passed 00:06:41.118 Test: lun_task_mgmt_execute_lun_reset ...passed 00:06:41.118 Test: lun_task_mgmt_execute_target_reset ...[2024-06-11 12:51:59.904247] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:06:41.118 passed 00:06:41.118 Test: lun_task_mgmt_execute_invalid_case ...passed 00:06:41.118 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:06:41.118 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:06:41.118 Test: lun_append_task_null_lun_not_supported ...passed 00:06:41.118 Test: lun_execute_scsi_task_pending ...[2024-06-11 12:51:59.904395] 
/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:06:41.118 passed 00:06:41.118 Test: lun_execute_scsi_task_complete ...passed 00:06:41.118 Test: lun_execute_scsi_task_resize ...passed 00:06:41.118 Test: lun_destruct_success ...passed 00:06:41.118 Test: lun_construct_null_ctx ...passed 00:06:41.118 Test: lun_construct_success ...passed 00:06:41.118 Test: lun_reset_task_wait_scsi_task_complete ...[2024-06-11 12:51:59.904524] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:06:41.118 passed 00:06:41.118 Test: lun_reset_task_suspend_scsi_task ...passed 00:06:41.118 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:06:41.118 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:06:41.118 00:06:41.118 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.118 suites 1 1 n/a 0 0 00:06:41.118 tests 18 18 18 0 0 00:06:41.118 asserts 153 153 153 0 n/a 00:06:41.118 00:06:41.118 Elapsed time = 0.001 seconds 00:06:41.118 12:51:59 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:06:41.118 00:06:41.118 00:06:41.118 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.118 http://cunit.sourceforge.net/ 00:06:41.118 00:06:41.118 00:06:41.118 Suite: scsi_suite 00:06:41.118 Test: scsi_init ...passed 00:06:41.118 00:06:41.118 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.118 suites 1 1 n/a 0 0 00:06:41.118 tests 1 1 1 0 0 00:06:41.118 asserts 1 1 1 0 n/a 00:06:41.118 00:06:41.118 Elapsed time = 0.000 seconds 00:06:41.377 12:51:59 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:06:41.377 00:06:41.377 00:06:41.377 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.377 http://cunit.sourceforge.net/ 00:06:41.377 00:06:41.377 00:06:41.377 Suite: translation_suite 00:06:41.377 Test: mode_select_6_test ...passed 00:06:41.377 Test: mode_select_6_test2 ...passed 00:06:41.377 Test: mode_sense_6_test ...passed 00:06:41.377 Test: mode_sense_10_test ...passed 00:06:41.377 Test: inquiry_evpd_test ...passed 00:06:41.377 Test: inquiry_standard_test ...passed 00:06:41.377 Test: inquiry_overflow_test ...passed 00:06:41.377 Test: task_complete_test ...passed 00:06:41.377 Test: lba_range_test ...passed 00:06:41.377 Test: xfer_len_test ...[2024-06-11 12:51:59.966704] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:06:41.377 passed 00:06:41.377 Test: xfer_test ...passed 00:06:41.377 Test: scsi_name_padding_test ...passed 00:06:41.377 Test: get_dif_ctx_test ...passed 00:06:41.377 Test: unmap_split_test ...passed 00:06:41.377 00:06:41.377 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.377 suites 1 1 n/a 0 0 00:06:41.377 tests 14 14 14 0 0 00:06:41.377 asserts 1200 1200 1200 0 n/a 00:06:41.377 00:06:41.377 Elapsed time = 0.003 seconds 00:06:41.377 12:51:59 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:06:41.377 00:06:41.377 00:06:41.377 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.377 http://cunit.sourceforge.net/ 00:06:41.377 00:06:41.377 00:06:41.377 Suite: reservation_suite 00:06:41.377 Test: test_reservation_register ...[2024-06-11 12:51:59.996867] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match 
registrant's key 0xa 00:06:41.377 passed 00:06:41.377 Test: test_reservation_reserve ...[2024-06-11 12:51:59.997332] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:41.377 [2024-06-11 12:51:59.997447] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:06:41.377 [2024-06-11 12:51:59.997577] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:06:41.377 passed 00:06:41.377 Test: test_reservation_preempt_non_all_regs ...[2024-06-11 12:51:59.997708] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:41.377 [2024-06-11 12:51:59.997800] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:06:41.377 passed 00:06:41.377 Test: test_reservation_preempt_all_regs ...passed 00:06:41.377 Test: test_reservation_cmds_conflict ...[2024-06-11 12:51:59.997969] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:41.377 [2024-06-11 12:51:59.998138] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:41.377 [2024-06-11 12:51:59.998222] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:06:41.377 [2024-06-11 12:51:59.998285] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:41.377 [2024-06-11 12:51:59.998318] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:41.377 [2024-06-11 12:51:59.998367] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:41.377 [2024-06-11 12:51:59.998397] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:41.377 passed 00:06:41.377 Test: test_scsi2_reserve_release ...passed 00:06:41.377 Test: test_pr_with_scsi2_reserve_release ...passed 00:06:41.377 00:06:41.377 [2024-06-11 12:51:59.998524] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:41.377 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.377 suites 1 1 n/a 0 0 00:06:41.377 tests 7 7 7 0 0 00:06:41.377 asserts 257 257 257 0 n/a 00:06:41.377 00:06:41.377 Elapsed time = 0.002 seconds 00:06:41.377 00:06:41.377 real 0m0.158s 00:06:41.377 user 0m0.087s 00:06:41.377 sys 0m0.072s 00:06:41.378 12:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.378 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.378 ************************************ 00:06:41.378 END TEST unittest_scsi 00:06:41.378 ************************************ 00:06:41.378 12:52:00 -- unit/unittest.sh@276 -- # uname -s 00:06:41.378 12:52:00 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:06:41.378 12:52:00 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:06:41.378 12:52:00 -- common/autotest_common.sh@1077 -- # '[' 
2 -le 1 ']' 00:06:41.378 12:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.378 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.378 ************************************ 00:06:41.378 START TEST unittest_sock 00:06:41.378 ************************************ 00:06:41.378 12:52:00 -- common/autotest_common.sh@1104 -- # unittest_sock 00:06:41.378 12:52:00 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:06:41.378 00:06:41.378 00:06:41.378 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.378 http://cunit.sourceforge.net/ 00:06:41.378 00:06:41.378 00:06:41.378 Suite: sock 00:06:41.378 Test: posix_sock ...passed 00:06:41.378 Test: ut_sock ...passed 00:06:41.378 Test: posix_sock_group ...passed 00:06:41.378 Test: ut_sock_group ...passed 00:06:41.378 Test: posix_sock_group_fairness ...passed 00:06:41.378 Test: _posix_sock_close ...passed 00:06:41.378 Test: sock_get_default_opts ...passed 00:06:41.378 Test: ut_sock_impl_get_set_opts ...passed 00:06:41.378 Test: posix_sock_impl_get_set_opts ...passed 00:06:41.378 Test: ut_sock_map ...passed 00:06:41.378 Test: override_impl_opts ...passed 00:06:41.378 Test: ut_sock_group_get_ctx ...passed 00:06:41.378 00:06:41.378 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.378 suites 1 1 n/a 0 0 00:06:41.378 tests 12 12 12 0 0 00:06:41.378 asserts 349 349 349 0 n/a 00:06:41.378 00:06:41.378 Elapsed time = 0.007 seconds 00:06:41.378 12:52:00 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:06:41.378 00:06:41.378 00:06:41.378 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.378 http://cunit.sourceforge.net/ 00:06:41.378 00:06:41.378 00:06:41.378 Suite: posix 00:06:41.378 Test: flush ...passed 00:06:41.378 00:06:41.378 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.378 suites 1 1 n/a 0 0 00:06:41.378 tests 1 1 1 0 0 00:06:41.378 asserts 28 28 28 0 n/a 00:06:41.378 00:06:41.378 Elapsed time = 0.000 seconds 00:06:41.378 12:52:00 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:41.378 ************************************ 00:06:41.378 END TEST unittest_sock 00:06:41.378 ************************************ 00:06:41.378 00:06:41.378 real 0m0.096s 00:06:41.378 user 0m0.045s 00:06:41.378 sys 0m0.027s 00:06:41.378 12:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.378 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.378 12:52:00 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:41.378 12:52:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.378 12:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.378 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.378 ************************************ 00:06:41.378 START TEST unittest_thread 00:06:41.378 ************************************ 00:06:41.378 12:52:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:41.637 00:06:41.637 00:06:41.637 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.637 http://cunit.sourceforge.net/ 00:06:41.637 00:06:41.637 00:06:41.637 Suite: io_channel 00:06:41.637 Test: thread_alloc ...passed 00:06:41.637 Test: thread_send_msg ...passed 00:06:41.637 Test: thread_poller ...passed 00:06:41.637 Test: poller_pause 
...passed 00:06:41.637 Test: thread_for_each ...passed 00:06:41.637 Test: for_each_channel_remove ...passed 00:06:41.637 Test: for_each_channel_unreg ...[2024-06-11 12:52:00.241201] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffe9df5e060 already registered (old:0x613000000200 new:0x6130000003c0) 00:06:41.637 passed 00:06:41.637 Test: thread_name ...passed 00:06:41.637 Test: channel ...[2024-06-11 12:52:00.245793] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x558638a090e0 00:06:41.637 passed 00:06:41.637 Test: channel_destroy_races ...passed 00:06:41.637 Test: thread_exit_test ...[2024-06-11 12:52:00.251222] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:06:41.637 passed 00:06:41.637 Test: thread_update_stats_test ...passed 00:06:41.637 Test: nested_channel ...passed 00:06:41.637 Test: device_unregister_and_thread_exit_race ...passed 00:06:41.637 Test: cache_closest_timed_poller ...passed 00:06:41.637 Test: multi_timed_pollers_have_same_expiration ...passed 00:06:41.637 Test: io_device_lookup ...passed 00:06:41.637 Test: spdk_spin ...[2024-06-11 12:52:00.263471] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:41.637 [2024-06-11 12:52:00.263559] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe9df5e050 00:06:41.637 [2024-06-11 12:52:00.263831] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:41.637 [2024-06-11 12:52:00.265596] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:41.637 [2024-06-11 12:52:00.265792] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe9df5e050 00:06:41.637 [2024-06-11 12:52:00.265943] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:41.637 [2024-06-11 12:52:00.266124] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe9df5e050 00:06:41.637 [2024-06-11 12:52:00.266275] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:41.637 [2024-06-11 12:52:00.266442] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe9df5e050 00:06:41.637 [2024-06-11 12:52:00.266587] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:06:41.637 [2024-06-11 12:52:00.266767] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe9df5e050 00:06:41.637 passed 00:06:41.637 Test: for_each_channel_and_thread_exit_race ...passed 00:06:41.637 Test: for_each_thread_and_thread_exit_race ...passed 00:06:41.637 00:06:41.637 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.637 suites 1 1 n/a 0 0 00:06:41.637 tests 20 20 20 0 0 00:06:41.637 asserts 409 
409 409 0 n/a 00:06:41.637 00:06:41.637 Elapsed time = 0.049 seconds 00:06:41.637 00:06:41.637 real 0m0.092s 00:06:41.637 user 0m0.074s 00:06:41.637 sys 0m0.013s 00:06:41.637 12:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.637 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.637 ************************************ 00:06:41.637 END TEST unittest_thread 00:06:41.637 ************************************ 00:06:41.637 12:52:00 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:41.637 12:52:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.637 12:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.637 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.637 ************************************ 00:06:41.637 START TEST unittest_iobuf 00:06:41.637 ************************************ 00:06:41.637 12:52:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:41.637 00:06:41.637 00:06:41.637 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.637 http://cunit.sourceforge.net/ 00:06:41.637 00:06:41.637 00:06:41.637 Suite: io_channel 00:06:41.637 Test: iobuf ...passed 00:06:41.637 Test: iobuf_cache ...[2024-06-11 12:52:00.369660] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:41.637 [2024-06-11 12:52:00.370237] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:41.637 [2024-06-11 12:52:00.370600] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:06:41.637 [2024-06-11 12:52:00.370822] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:41.637 [2024-06-11 12:52:00.371071] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:41.637 [2024-06-11 12:52:00.371283] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:06:41.637 passed 00:06:41.637 00:06:41.637 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.637 suites 1 1 n/a 0 0 00:06:41.637 tests 2 2 2 0 0 00:06:41.637 asserts 107 107 107 0 n/a 00:06:41.637 00:06:41.637 Elapsed time = 0.009 seconds 00:06:41.637 00:06:41.637 real 0m0.047s 00:06:41.637 user 0m0.025s 00:06:41.637 sys 0m0.020s 00:06:41.637 12:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.637 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.637 ************************************ 00:06:41.637 END TEST unittest_iobuf 00:06:41.637 ************************************ 00:06:41.637 12:52:00 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:06:41.637 12:52:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.637 12:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.637 12:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.637 ************************************ 00:06:41.637 START TEST unittest_util 00:06:41.637 ************************************ 00:06:41.637 12:52:00 -- common/autotest_common.sh@1104 -- # unittest_util 00:06:41.637 12:52:00 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:06:41.637 00:06:41.637 00:06:41.637 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.637 http://cunit.sourceforge.net/ 00:06:41.637 00:06:41.637 00:06:41.637 Suite: base64 00:06:41.637 Test: test_base64_get_encoded_strlen ...passed 00:06:41.637 Test: test_base64_get_decoded_len ...passed 00:06:41.637 Test: test_base64_encode ...passed 00:06:41.637 Test: test_base64_decode ...passed 00:06:41.637 Test: test_base64_urlsafe_encode ...passed 00:06:41.637 Test: test_base64_urlsafe_decode ...passed 00:06:41.637 00:06:41.637 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.637 suites 1 1 n/a 0 0 00:06:41.637 tests 6 6 6 0 0 00:06:41.637 asserts 112 112 112 0 n/a 00:06:41.637 00:06:41.637 Elapsed time = 0.000 seconds 00:06:41.896 12:52:00 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:06:41.896 00:06:41.896 00:06:41.896 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.896 http://cunit.sourceforge.net/ 00:06:41.896 00:06:41.896 00:06:41.896 Suite: bit_array 00:06:41.896 Test: test_1bit ...passed 00:06:41.896 Test: test_64bit ...passed 00:06:41.896 Test: test_find ...passed 00:06:41.896 Test: test_resize ...passed 00:06:41.896 Test: test_errors ...passed 00:06:41.896 Test: test_count ...passed 00:06:41.896 Test: test_mask_store_load ...passed 00:06:41.896 Test: test_mask_clear ...passed 00:06:41.896 00:06:41.896 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.896 suites 1 1 n/a 0 0 00:06:41.896 tests 8 8 8 0 0 00:06:41.896 asserts 5075 5075 5075 0 n/a 00:06:41.896 00:06:41.896 Elapsed time = 0.002 seconds 00:06:41.896 12:52:00 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:06:41.896 00:06:41.896 00:06:41.896 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.896 http://cunit.sourceforge.net/ 00:06:41.896 00:06:41.896 00:06:41.896 Suite: cpuset 00:06:41.896 Test: test_cpuset ...passed 00:06:41.896 Test: test_cpuset_parse ...[2024-06-11 12:52:00.513682] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:06:41.896 [2024-06-11 12:52:00.514073] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:06:41.896 [2024-06-11 12:52:00.514266] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:06:41.896 [2024-06-11 12:52:00.514436] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:06:41.896 [2024-06-11 12:52:00.514578] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:06:41.897 [2024-06-11 12:52:00.514709] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:06:41.897 [2024-06-11 12:52:00.514867] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:06:41.897 [2024-06-11 12:52:00.515047] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:06:41.897 passed 00:06:41.897 Test: test_cpuset_fmt ...passed 00:06:41.897 00:06:41.897 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.897 suites 1 1 n/a 0 0 00:06:41.897 tests 3 3 3 0 0 00:06:41.897 asserts 65 65 65 0 n/a 00:06:41.897 00:06:41.897 Elapsed time = 0.002 seconds 00:06:41.897 12:52:00 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:06:41.897 00:06:41.897 00:06:41.897 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.897 http://cunit.sourceforge.net/ 00:06:41.897 00:06:41.897 00:06:41.897 Suite: crc16 00:06:41.897 Test: test_crc16_t10dif ...passed 00:06:41.897 Test: test_crc16_t10dif_seed ...passed 00:06:41.897 Test: test_crc16_t10dif_copy ...passed 00:06:41.897 00:06:41.897 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.897 suites 1 1 n/a 0 0 00:06:41.897 tests 3 3 3 0 0 00:06:41.897 asserts 5 5 5 0 n/a 00:06:41.897 00:06:41.897 Elapsed time = 0.000 seconds 00:06:41.897 12:52:00 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:06:41.897 00:06:41.897 00:06:41.897 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.897 http://cunit.sourceforge.net/ 00:06:41.897 00:06:41.897 00:06:41.897 Suite: crc32_ieee 00:06:41.897 Test: test_crc32_ieee ...passed 00:06:41.897 00:06:41.897 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.897 suites 1 1 n/a 0 0 00:06:41.897 tests 1 1 1 0 0 00:06:41.897 asserts 1 1 1 0 n/a 00:06:41.897 00:06:41.897 Elapsed time = 0.000 seconds 00:06:41.897 12:52:00 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:06:41.897 00:06:41.897 00:06:41.897 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.897 http://cunit.sourceforge.net/ 00:06:41.897 00:06:41.897 00:06:41.897 Suite: crc32c 00:06:41.897 Test: test_crc32c ...passed 00:06:41.897 Test: test_crc32c_nvme ...passed 00:06:41.897 00:06:41.897 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.897 suites 1 1 n/a 0 0 00:06:41.897 tests 2 2 2 0 0 00:06:41.897 asserts 16 16 16 0 n/a 00:06:41.897 00:06:41.897 Elapsed time = 0.001 seconds 00:06:41.897 12:52:00 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:06:41.897 00:06:41.897 00:06:41.897 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.897 http://cunit.sourceforge.net/ 00:06:41.897 00:06:41.897 00:06:41.897 Suite: crc64 00:06:41.897 Test: test_crc64_nvme 
...passed 00:06:41.897 00:06:41.897 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.897 suites 1 1 n/a 0 0 00:06:41.897 tests 1 1 1 0 0 00:06:41.897 asserts 4 4 4 0 n/a 00:06:41.897 00:06:41.897 Elapsed time = 0.001 seconds 00:06:41.897 12:52:00 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:06:41.897 00:06:41.897 00:06:41.897 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.897 http://cunit.sourceforge.net/ 00:06:41.897 00:06:41.897 00:06:41.897 Suite: string 00:06:41.897 Test: test_parse_ip_addr ...passed 00:06:41.897 Test: test_str_chomp ...passed 00:06:41.897 Test: test_parse_capacity ...passed 00:06:41.897 Test: test_sprintf_append_realloc ...passed 00:06:41.897 Test: test_strtol ...passed 00:06:41.897 Test: test_strtoll ...passed 00:06:41.897 Test: test_strarray ...passed 00:06:41.897 Test: test_strcpy_replace ...passed 00:06:41.897 00:06:41.897 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.897 suites 1 1 n/a 0 0 00:06:41.897 tests 8 8 8 0 0 00:06:41.897 asserts 161 161 161 0 n/a 00:06:41.897 00:06:41.897 Elapsed time = 0.001 seconds 00:06:41.897 12:52:00 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:06:41.897 00:06:41.897 00:06:41.897 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.897 http://cunit.sourceforge.net/ 00:06:41.897 00:06:41.897 00:06:41.897 Suite: dif 00:06:41.897 Test: dif_generate_and_verify_test ...[2024-06-11 12:52:00.686144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:41.897 [2024-06-11 12:52:00.686728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:41.897 [2024-06-11 12:52:00.687121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:41.897 [2024-06-11 12:52:00.687512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:41.897 [2024-06-11 12:52:00.687899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:41.897 [2024-06-11 12:52:00.688296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:41.897 passed 00:06:41.897 Test: dif_disable_check_test ...[2024-06-11 12:52:00.689610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:41.897 [2024-06-11 12:52:00.690067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:41.897 [2024-06-11 12:52:00.690453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:41.897 passed 00:06:41.897 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-06-11 12:52:00.691756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:06:41.897 [2024-06-11 12:52:00.692179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:06:41.897 [2024-06-11 
12:52:00.692593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:06:41.897 [2024-06-11 12:52:00.693035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:06:41.897 [2024-06-11 12:52:00.693484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:41.897 [2024-06-11 12:52:00.693909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:41.897 [2024-06-11 12:52:00.694325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:41.897 [2024-06-11 12:52:00.694730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:41.897 [2024-06-11 12:52:00.695133] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:41.897 [2024-06-11 12:52:00.695574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:41.897 [2024-06-11 12:52:00.695995] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:41.897 passed 00:06:41.897 Test: dif_apptag_mask_test ...[2024-06-11 12:52:00.696556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:41.897 [2024-06-11 12:52:00.696939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:41.897 passed 00:06:41.897 Test: dif_sec_512_md_0_error_test ...[2024-06-11 12:52:00.697379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:41.897 passed 00:06:41.897 Test: dif_sec_4096_md_0_error_test ...[2024-06-11 12:52:00.697771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:41.897 [2024-06-11 12:52:00.697916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:06:41.897 passed 00:06:41.897 Test: dif_sec_4100_md_128_error_test ...[2024-06-11 12:52:00.698052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:41.897 [2024-06-11 12:52:00.698120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:41.897 passed 00:06:41.897 Test: dif_guard_seed_test ...passed 00:06:41.897 Test: dif_guard_value_test ...passed 00:06:41.897 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:06:41.897 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:06:41.897 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:41.897 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:41.897 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:42.158 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:06:42.158 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:42.158 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:42.158 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:06:42.158 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:42.158 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:06:42.158 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:06:42.158 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:42.158 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:42.158 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:42.158 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:42.158 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:42.158 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:42.158 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-11 12:52:00.745722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd48, Actual=fd4c 00:06:42.158 [2024-06-11 12:52:00.748256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fe25, Actual=fe21 00:06:42.158 [2024-06-11 12:52:00.750813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.753355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.755941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.158 [2024-06-11 12:52:00.758503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.158 [2024-06-11 12:52:00.761038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=2f60 00:06:42.158 [2024-06-11 12:52:00.763478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fe21, Actual=2760 00:06:42.158 [2024-06-11 12:52:00.765951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=97, Expected=1ab353ed, Actual=1ab753ed 00:06:42.158 [2024-06-11 12:52:00.768499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38534660, Actual=38574660 00:06:42.158 [2024-06-11 12:52:00.771071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.773609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.776167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.158 [2024-06-11 12:52:00.778714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.158 [2024-06-11 12:52:00.781268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=e55d5311 00:06:42.158 [2024-06-11 12:52:00.783719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38574660, Actual=b4a4da3c 00:06:42.158 [2024-06-11 12:52:00.786194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:06:42.158 [2024-06-11 12:52:00.788730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:06:42.158 [2024-06-11 12:52:00.791274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.793843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.796376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=65 00:06:42.158 [2024-06-11 12:52:00.798938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=65 00:06:42.158 [2024-06-11 12:52:00.801495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=f07a8b83c588f0bb 00:06:42.158 [2024-06-11 12:52:00.803931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4837a266, Actual=47eba87e6f68f679 00:06:42.158 passed 00:06:42.158 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-06-11 12:52:00.805672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:06:42.158 [2024-06-11 12:52:00.806092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:06:42.158 [2024-06-11 12:52:00.806493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.806891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.807306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.158 [2024-06-11 12:52:00.807701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.158 [2024-06-11 12:52:00.808101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2f60 00:06:42.158 [2024-06-11 12:52:00.808395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2760 00:06:42.158 [2024-06-11 12:52:00.808677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab353ed, Actual=1ab753ed 00:06:42.158 [2024-06-11 12:52:00.809053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38534660, Actual=38574660 00:06:42.158 [2024-06-11 12:52:00.809477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.809905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.810346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.158 [2024-06-11 12:52:00.810748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.158 [2024-06-11 12:52:00.811148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e55d5311 00:06:42.158 [2024-06-11 12:52:00.811435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b4a4da3c 00:06:42.158 [2024-06-11 12:52:00.811743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:06:42.158 [2024-06-11 12:52:00.812132] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:06:42.158 [2024-06-11 12:52:00.812526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.812919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.813314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:06:42.158 [2024-06-11 12:52:00.813749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:06:42.158 [2024-06-11 12:52:00.814173] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f07a8b83c588f0bb 00:06:42.158 [2024-06-11 12:52:00.814471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=88010a2d4837a266, Actual=47eba87e6f68f679 00:06:42.158 passed 00:06:42.158 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-06-11 12:52:00.814963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:06:42.158 [2024-06-11 12:52:00.815363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:06:42.158 [2024-06-11 12:52:00.815756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.816151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.816558] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.158 [2024-06-11 12:52:00.816968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.158 [2024-06-11 12:52:00.817365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2f60 00:06:42.158 [2024-06-11 12:52:00.817700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2760 00:06:42.158 [2024-06-11 12:52:00.818021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab353ed, Actual=1ab753ed 00:06:42.158 [2024-06-11 12:52:00.818423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38534660, Actual=38574660 00:06:42.158 [2024-06-11 12:52:00.818822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.819242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.819663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.158 [2024-06-11 12:52:00.820087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.158 [2024-06-11 12:52:00.820484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e55d5311 00:06:42.158 [2024-06-11 12:52:00.820782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b4a4da3c 00:06:42.158 [2024-06-11 12:52:00.821098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:06:42.158 [2024-06-11 12:52:00.821516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:06:42.158 [2024-06-11 12:52:00.821931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.822345] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.158 [2024-06-11 12:52:00.822753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:06:42.158 [2024-06-11 12:52:00.823148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:06:42.158 [2024-06-11 12:52:00.823573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f07a8b83c588f0bb 00:06:42.158 [2024-06-11 12:52:00.823875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=47eba87e6f68f679 00:06:42.158 passed 00:06:42.158 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-06-11 12:52:00.824390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:06:42.158 [2024-06-11 12:52:00.824809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:06:42.159 [2024-06-11 12:52:00.825219] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.825631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.826097] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.159 [2024-06-11 12:52:00.826507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.159 [2024-06-11 12:52:00.826911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2f60 00:06:42.159 [2024-06-11 12:52:00.827205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2760 00:06:42.159 [2024-06-11 12:52:00.827494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab353ed, Actual=1ab753ed 00:06:42.159 [2024-06-11 12:52:00.827889] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38534660, Actual=38574660 00:06:42.159 [2024-06-11 12:52:00.828310] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.828715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.829112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.159 [2024-06-11 12:52:00.829537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.159 [2024-06-11 12:52:00.829971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=1ab753ed, Actual=e55d5311 00:06:42.159 [2024-06-11 12:52:00.830292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b4a4da3c 00:06:42.159 [2024-06-11 12:52:00.830603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:06:42.159 [2024-06-11 12:52:00.831006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:06:42.159 [2024-06-11 12:52:00.831406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.831808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.832208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:06:42.159 [2024-06-11 12:52:00.832611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:06:42.159 [2024-06-11 12:52:00.833028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f07a8b83c588f0bb 00:06:42.159 [2024-06-11 12:52:00.833330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=47eba87e6f68f679 00:06:42.159 passed 00:06:42.159 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-06-11 12:52:00.833896] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:06:42.159 [2024-06-11 12:52:00.834292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:06:42.159 [2024-06-11 12:52:00.834700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.835100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.835524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.159 [2024-06-11 12:52:00.835922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.159 [2024-06-11 12:52:00.836321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2f60 00:06:42.159 [2024-06-11 12:52:00.836613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2760 00:06:42.159 passed 00:06:42.159 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-06-11 12:52:00.837124] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab353ed, Actual=1ab753ed 00:06:42.159 [2024-06-11 12:52:00.837532] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=38534660, Actual=38574660 00:06:42.159 [2024-06-11 12:52:00.837970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.838376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.838779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.159 [2024-06-11 12:52:00.839176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.159 [2024-06-11 12:52:00.839578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e55d5311 00:06:42.159 [2024-06-11 12:52:00.839877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b4a4da3c 00:06:42.159 [2024-06-11 12:52:00.840210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:06:42.159 [2024-06-11 12:52:00.840619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:06:42.159 [2024-06-11 12:52:00.841029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.841442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.841857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:06:42.159 [2024-06-11 12:52:00.842268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:06:42.159 [2024-06-11 12:52:00.842689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f07a8b83c588f0bb 00:06:42.159 [2024-06-11 12:52:00.842996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=47eba87e6f68f679 00:06:42.159 passed 00:06:42.159 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-06-11 12:52:00.843491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:06:42.159 [2024-06-11 12:52:00.843895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:06:42.159 [2024-06-11 12:52:00.844292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.844696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.845117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=40058 00:06:42.159 [2024-06-11 12:52:00.845528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.159 [2024-06-11 12:52:00.845945] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2f60 00:06:42.159 [2024-06-11 12:52:00.846246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2760 00:06:42.159 passed 00:06:42.159 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-06-11 12:52:00.846762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab353ed, Actual=1ab753ed 00:06:42.159 [2024-06-11 12:52:00.847160] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38534660, Actual=38574660 00:06:42.159 [2024-06-11 12:52:00.847602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.848004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.848407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.159 [2024-06-11 12:52:00.848807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:06:42.159 [2024-06-11 12:52:00.849213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e55d5311 00:06:42.159 [2024-06-11 12:52:00.849520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=b4a4da3c 00:06:42.159 [2024-06-11 12:52:00.849885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:06:42.159 [2024-06-11 12:52:00.850309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:06:42.159 [2024-06-11 12:52:00.850713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.851108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.851509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:06:42.159 [2024-06-11 12:52:00.851907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5c 00:06:42.159 [2024-06-11 12:52:00.852329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f07a8b83c588f0bb 00:06:42.159 [2024-06-11 12:52:00.852639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=47eba87e6f68f679 00:06:42.159 passed 00:06:42.159 Test: 
dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:06:42.159 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:42.159 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:42.159 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:42.159 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:42.159 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:42.159 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:42.159 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:42.159 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:42.159 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-11 12:52:00.898211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd48, Actual=fd4c 00:06:42.159 [2024-06-11 12:52:00.899426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=32e2, Actual=32e6 00:06:42.159 [2024-06-11 12:52:00.900624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.901864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.903088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.159 [2024-06-11 12:52:00.904287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.159 [2024-06-11 12:52:00.905507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=2f60 00:06:42.159 [2024-06-11 12:52:00.906712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=8256 00:06:42.159 [2024-06-11 12:52:00.907921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab353ed, Actual=1ab753ed 00:06:42.159 [2024-06-11 12:52:00.909126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=433fe6, Actual=473fe6 00:06:42.159 [2024-06-11 12:52:00.910360] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.911601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.912804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.159 [2024-06-11 12:52:00.914035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.159 [2024-06-11 12:52:00.915249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=e55d5311 00:06:42.159 [2024-06-11 12:52:00.916461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, 
Actual=89fe0463 00:06:42.159 [2024-06-11 12:52:00.917675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:06:42.159 [2024-06-11 12:52:00.918937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=27c028b22b51fb04, Actual=27c028b22b55fb04 00:06:42.159 [2024-06-11 12:52:00.920146] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.921365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.922607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=65 00:06:42.159 [2024-06-11 12:52:00.923819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=65 00:06:42.159 [2024-06-11 12:52:00.925021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=f07a8b83c588f0bb 00:06:42.159 [2024-06-11 12:52:00.926276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=f323a0d5b5cf848d 00:06:42.159 passed 00:06:42.159 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-11 12:52:00.926797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd48, Actual=fd4c 00:06:42.159 [2024-06-11 12:52:00.927180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fb35, Actual=fb31 00:06:42.159 [2024-06-11 12:52:00.927554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.927928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.928323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:06:42.159 [2024-06-11 12:52:00.928719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:06:42.159 [2024-06-11 12:52:00.929092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=2f60 00:06:42.159 [2024-06-11 12:52:00.929478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=4b81 00:06:42.159 [2024-06-11 12:52:00.929861] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab353ed, Actual=1ab753ed 00:06:42.159 [2024-06-11 12:52:00.930241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9ef3f7b4, Actual=9ef7f7b4 00:06:42.159 [2024-06-11 12:52:00.930628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.931000] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.931383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:06:42.159 [2024-06-11 12:52:00.931760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:06:42.159 [2024-06-11 12:52:00.932126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=e55d5311 00:06:42.159 [2024-06-11 12:52:00.932498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=174ecc31 00:06:42.159 [2024-06-11 12:52:00.932885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:06:42.159 [2024-06-11 12:52:00.933252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5321ca5ff0851ef9, Actual=5321ca5ff0811ef9 00:06:42.159 [2024-06-11 12:52:00.933638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.934016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:06:42.159 [2024-06-11 12:52:00.934392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:06:42.159 [2024-06-11 12:52:00.934758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:06:42.159 [2024-06-11 12:52:00.935141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=f07a8b83c588f0bb 00:06:42.159 [2024-06-11 12:52:00.935516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=87c242386e1b6170 00:06:42.159 passed 00:06:42.159 Test: dix_sec_512_md_0_error ...[2024-06-11 12:52:00.935824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:06:42.159 passed 00:06:42.159 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:06:42.159 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:42.160 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:42.160 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:42.160 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:42.160 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:42.160 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:42.160 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:42.160 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:42.160 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-11 12:52:00.981095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd48, Actual=fd4c 00:06:42.160 [2024-06-11 12:52:00.982330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=32e2, Actual=32e6 00:06:42.160 [2024-06-11 12:52:00.983535] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.160 [2024-06-11 12:52:00.984730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.160 [2024-06-11 12:52:00.985972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.160 [2024-06-11 12:52:00.987183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.160 [2024-06-11 12:52:00.988560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=2f60 00:06:42.160 [2024-06-11 12:52:00.989795] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=8256 00:06:42.160 [2024-06-11 12:52:00.990994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab353ed, Actual=1ab753ed 00:06:42.160 [2024-06-11 12:52:00.992190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=433fe6, Actual=473fe6 00:06:42.419 [2024-06-11 12:52:00.993545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.419 [2024-06-11 12:52:00.994764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.419 [2024-06-11 12:52:00.996059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.419 [2024-06-11 12:52:00.997356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=40061 00:06:42.419 [2024-06-11 12:52:00.998587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=e55d5311 00:06:42.419 [2024-06-11 12:52:00.999941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, Actual=89fe0463 
00:06:42.419 [2024-06-11 12:52:01.001167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:06:42.419 [2024-06-11 12:52:01.002387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=27c028b22b51fb04, Actual=27c028b22b55fb04 00:06:42.419 [2024-06-11 12:52:01.003760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.419 [2024-06-11 12:52:01.004960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8c 00:06:42.419 [2024-06-11 12:52:01.006274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=25a 00:06:42.419 [2024-06-11 12:52:01.007498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=25a 00:06:42.419 [2024-06-11 12:52:01.008703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=9eea84f3a9fdb5b2 00:06:42.419 [2024-06-11 12:52:01.010054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=9bb0da30a34e6c40 00:06:42.419 passed 00:06:42.419 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-11 12:52:01.010673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:06:42.419 [2024-06-11 12:52:01.011043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=98c4, Actual=9ac4 00:06:42.419 [2024-06-11 12:52:01.011416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:06:42.419 [2024-06-11 12:52:01.011787] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:06:42.419 [2024-06-11 12:52:01.012181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:06:42.419 [2024-06-11 12:52:01.012548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:06:42.419 [2024-06-11 12:52:01.013007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=4c59 00:06:42.419 [2024-06-11 12:52:01.013357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=7f3 00:06:42.419 [2024-06-11 12:52:01.013783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab751ed, Actual=1ab753ed 00:06:42.419 [2024-06-11 12:52:01.014176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=442d43ce, Actual=442d41ce 00:06:42.419 [2024-06-11 12:52:01.014563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:06:42.419 [2024-06-11 12:52:01.014938] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:06:42.419 [2024-06-11 12:52:01.015309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:06:42.419 [2024-06-11 12:52:01.015680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:06:42.419 [2024-06-11 12:52:01.016042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e97c3497 00:06:42.419 [2024-06-11 12:52:01.016501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=8299a55b 00:06:42.419 [2024-06-11 12:52:01.016868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc22d3, Actual=a576a7728ecc20d3 00:06:42.419 [2024-06-11 12:52:01.017246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c95e9b9012ad2f56, Actual=c95e9b9012ad2d56 00:06:42.419 [2024-06-11 12:52:01.017627] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:06:42.419 [2024-06-11 12:52:01.018008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:06:42.419 [2024-06-11 12:52:01.018376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:06:42.419 [2024-06-11 12:52:01.018744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:06:42.419 [2024-06-11 12:52:01.019112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=9eea84f3a9fdb5b2 00:06:42.419 [2024-06-11 12:52:01.019475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7b2d4ed09c7267da 00:06:42.419 passed 00:06:42.419 Test: set_md_interleave_iovs_test ...passed 00:06:42.419 Test: set_md_interleave_iovs_split_test ...passed 00:06:42.419 Test: dif_generate_stream_pi_16_test ...passed 00:06:42.419 Test: dif_generate_stream_test ...passed 00:06:42.419 Test: set_md_interleave_iovs_alignment_test ...[2024-06-11 12:52:01.027771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:06:42.419 passed 00:06:42.419 Test: dif_generate_split_test ...passed 00:06:42.419 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:06:42.419 Test: dif_verify_split_test ...passed 00:06:42.419 Test: dif_verify_stream_multi_segments_test ...passed 00:06:42.419 Test: update_crc32c_pi_16_test ...passed 00:06:42.419 Test: update_crc32c_test ...passed 00:06:42.419 Test: dif_update_crc32c_split_test ...passed 00:06:42.419 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:06:42.419 Test: get_range_with_md_test ...passed 00:06:42.419 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:06:42.419 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:06:42.419 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:42.419 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:06:42.420 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:06:42.420 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:42.420 Test: dif_generate_and_verify_unmap_test ...passed 00:06:42.420 00:06:42.420 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.420 suites 1 1 n/a 0 0 00:06:42.420 tests 79 79 79 0 0 00:06:42.420 asserts 3584 3584 3584 0 n/a 00:06:42.420 00:06:42.420 Elapsed time = 0.354 seconds 00:06:42.420 12:52:01 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:06:42.420 00:06:42.420 00:06:42.420 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.420 http://cunit.sourceforge.net/ 00:06:42.420 00:06:42.420 00:06:42.420 Suite: iov 00:06:42.420 Test: test_single_iov ...passed 00:06:42.420 Test: test_simple_iov ...passed 00:06:42.420 Test: test_complex_iov ...passed 00:06:42.420 Test: test_iovs_to_buf ...passed 00:06:42.420 Test: test_buf_to_iovs ...passed 00:06:42.420 Test: test_memset ...passed 00:06:42.420 Test: test_iov_one ...passed 00:06:42.420 Test: test_iov_xfer ...passed 00:06:42.420 00:06:42.420 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.420 suites 1 1 n/a 0 0 00:06:42.420 tests 8 8 8 0 0 00:06:42.420 asserts 156 156 156 0 n/a 00:06:42.420 00:06:42.420 Elapsed time = 0.000 seconds 00:06:42.420 12:52:01 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:06:42.420 00:06:42.420 00:06:42.420 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.420 http://cunit.sourceforge.net/ 00:06:42.420 00:06:42.420 00:06:42.420 Suite: math 00:06:42.420 Test: test_serial_number_arithmetic ...passed 00:06:42.420 Suite: erase 00:06:42.420 Test: test_memset_s ...passed 00:06:42.420 00:06:42.420 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.420 suites 2 2 n/a 0 0 00:06:42.420 tests 2 2 2 0 0 00:06:42.420 asserts 18 18 18 0 n/a 00:06:42.420 00:06:42.420 Elapsed time = 0.000 seconds 00:06:42.420 12:52:01 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:06:42.420 00:06:42.420 00:06:42.420 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.420 http://cunit.sourceforge.net/ 00:06:42.420 00:06:42.420 00:06:42.420 Suite: pipe 00:06:42.420 Test: test_create_destroy ...passed 00:06:42.420 Test: test_write_get_buffer ...passed 00:06:42.420 Test: test_write_advance ...passed 00:06:42.420 Test: test_read_get_buffer ...passed 00:06:42.420 Test: test_read_advance ...passed 00:06:42.420 Test: test_data ...passed 00:06:42.420 00:06:42.420 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:42.420 suites 1 1 n/a 0 0 00:06:42.420 tests 6 6 6 0 0 00:06:42.420 asserts 250 250 250 0 n/a 00:06:42.420 00:06:42.420 Elapsed time = 0.000 seconds 00:06:42.420 12:52:01 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:06:42.420 00:06:42.420 00:06:42.420 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.420 http://cunit.sourceforge.net/ 00:06:42.420 00:06:42.420 00:06:42.420 Suite: xor 00:06:42.420 Test: test_xor_gen ...passed 00:06:42.420 00:06:42.420 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.420 suites 1 1 n/a 0 0 00:06:42.420 tests 1 1 1 0 0 00:06:42.420 asserts 17 17 17 0 n/a 00:06:42.420 00:06:42.420 Elapsed time = 0.007 seconds 00:06:42.420 00:06:42.420 real 0m0.784s 00:06:42.420 user 0m0.526s 00:06:42.420 sys 0m0.209s 00:06:42.420 12:52:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.420 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:42.420 ************************************ 00:06:42.420 END TEST unittest_util 00:06:42.420 ************************************ 00:06:42.679 12:52:01 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:42.679 12:52:01 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:42.679 12:52:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.679 12:52:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.679 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:42.679 ************************************ 00:06:42.679 START TEST unittest_vhost 00:06:42.679 ************************************ 00:06:42.679 12:52:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:42.679 00:06:42.679 00:06:42.679 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.679 http://cunit.sourceforge.net/ 00:06:42.679 00:06:42.679 00:06:42.679 Suite: vhost_suite 00:06:42.679 Test: desc_to_iov_test ...[2024-06-11 12:52:01.300368] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:06:42.679 passed 00:06:42.679 Test: create_controller_test ...[2024-06-11 12:52:01.304874] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:42.679 [2024-06-11 12:52:01.305105] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:06:42.679 [2024-06-11 12:52:01.305265] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:42.679 [2024-06-11 12:52:01.305489] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:06:42.679 [2024-06-11 12:52:01.305670] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:06:42.679 [2024-06-11 12:52:01.305942] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-06-11 12:52:01.307013] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:06:42.679 passed 00:06:42.679 Test: session_find_by_vid_test ...passed 00:06:42.679 Test: remove_controller_test ...[2024-06-11 12:52:01.309347] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:06:42.679 passed 00:06:42.679 Test: vq_avail_ring_get_test ...passed 00:06:42.679 Test: vq_packed_ring_test ...passed 00:06:42.679 Test: vhost_blk_construct_test ...passed 00:06:42.679 00:06:42.679 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.679 suites 1 1 n/a 0 0 00:06:42.679 tests 7 7 7 0 0 00:06:42.679 asserts 145 145 145 0 n/a 00:06:42.679 00:06:42.679 Elapsed time = 0.012 seconds 00:06:42.679 00:06:42.679 real 0m0.052s 00:06:42.679 user 0m0.029s 00:06:42.679 sys 0m0.021s 00:06:42.679 12:52:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.679 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:42.679 ************************************ 00:06:42.679 END TEST unittest_vhost 00:06:42.679 ************************************ 00:06:42.679 12:52:01 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:42.679 12:52:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.679 12:52:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.679 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:42.679 ************************************ 00:06:42.679 START TEST unittest_dma 00:06:42.679 ************************************ 00:06:42.679 12:52:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:42.679 00:06:42.679 00:06:42.679 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.679 http://cunit.sourceforge.net/ 00:06:42.679 00:06:42.679 00:06:42.679 Suite: dma_suite 00:06:42.679 Test: test_dma ...[2024-06-11 12:52:01.397453] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:06:42.679 passed 00:06:42.679 00:06:42.679 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.679 suites 1 1 n/a 0 0 00:06:42.679 tests 1 1 1 0 0 00:06:42.679 asserts 50 50 50 0 n/a 00:06:42.679 00:06:42.679 Elapsed time = 0.000 seconds 00:06:42.679 00:06:42.679 real 0m0.030s 00:06:42.679 user 0m0.016s 00:06:42.679 sys 0m0.015s 00:06:42.679 12:52:01 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.679 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:42.679 ************************************ 00:06:42.679 END TEST unittest_dma 00:06:42.679 ************************************ 00:06:42.679 12:52:01 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:06:42.679 12:52:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.679 12:52:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.679 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:42.679 ************************************ 00:06:42.679 START TEST unittest_init 00:06:42.679 ************************************ 00:06:42.679 12:52:01 -- common/autotest_common.sh@1104 -- # unittest_init 00:06:42.679 12:52:01 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:06:42.679 00:06:42.679 00:06:42.679 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.679 http://cunit.sourceforge.net/ 00:06:42.679 00:06:42.679 00:06:42.679 Suite: subsystem_suite 00:06:42.679 Test: subsystem_sort_test_depends_on_single ...passed 00:06:42.679 Test: subsystem_sort_test_depends_on_multiple ...passed 00:06:42.679 Test: subsystem_sort_test_missing_dependency ...[2024-06-11 12:52:01.480901] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:06:42.679 [2024-06-11 12:52:01.481280] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:06:42.679 passed 00:06:42.679 00:06:42.679 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.679 suites 1 1 n/a 0 0 00:06:42.680 tests 3 3 3 0 0 00:06:42.680 asserts 20 20 20 0 n/a 00:06:42.680 00:06:42.680 Elapsed time = 0.001 seconds 00:06:42.680 00:06:42.680 real 0m0.037s 00:06:42.680 user 0m0.021s 00:06:42.680 sys 0m0.015s 00:06:42.680 12:52:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.680 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:42.680 ************************************ 00:06:42.680 END TEST unittest_init 00:06:42.680 ************************************ 00:06:42.938 12:52:01 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:06:42.938 12:52:01 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:42.938 12:52:01 -- unit/unittest.sh@290 -- # hostname 00:06:42.938 12:52:01 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:42.938 geninfo: WARNING: invalid characters removed from testname! 
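The dif.c *ERROR* lines earlier in this log are expected output: those tests deliberately corrupt one field of the per-block protection information and check that verification rejects it. For orientation, the fragment below is a minimal, self-contained sketch of the standard 8-byte T10 PI tuple and of a guard check in the same spirit as the "Failed to compare Guard" messages. It is not SPDK's implementation (that lives in lib/util/dif.c); the struct layout and CRC parameters are the generic T10 ones and are stated here as assumptions, with fields shown in host byte order for simplicity.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Standard 8-byte T10 Protection Information tuple appended to each block.
 * Shown in host order for illustration; on the wire the fields are big-endian. */
struct t10_pi_tuple {
	uint16_t guard;   /* CRC-16 of the data block */
	uint16_t app_tag; /* application-defined tag */
	uint32_t ref_tag; /* typically the low 32 bits of the LBA */
};

/* CRC-16/T10-DIF: polynomial 0x8BB7, initial value 0, no reflection. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
					     : (uint16_t)(crc << 1);
		}
	}
	return crc;
}

/* A guard check in the spirit of the "Failed to compare Guard" messages:
 * recompute the CRC over the data and compare it with the stored tuple. */
static int verify_guard(const uint8_t *block, size_t len,
			const struct t10_pi_tuple *pi, uint64_t lba)
{
	uint16_t actual = crc16_t10dif(block, len);

	if (actual != pi->guard) {
		fprintf(stderr, "Guard mismatch: LBA=%llu, Expected=%x, Actual=%x\n",
			(unsigned long long)lba, pi->guard, actual);
		return -1;
	}
	return 0;
}

The App Tag and Ref Tag comparisons reported above are the analogous checks on the other two tuple fields; only the guard involves a CRC over the block data, which is why a single corrupted data byte shows up as a guard failure.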
00:07:09.470 12:52:26 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:07:12.751 12:52:31 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:15.278 12:52:33 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:17.817 12:52:36 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:20.351 12:52:39 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:22.882 12:52:41 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:25.417 12:52:44 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:27.990 12:52:46 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:27.990 12:52:46 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:28.249 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:28.249 Found 309 entries. 
00:07:28.249 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:07:28.249 Writing .css and .png files. 00:07:28.249 Generating output. 00:07:28.249 Processing file include/linux/virtio_ring.h 00:07:28.815 Processing file include/spdk/trace.h 00:07:28.815 Processing file include/spdk/nvme_spec.h 00:07:28.815 Processing file include/spdk/nvmf_transport.h 00:07:28.815 Processing file include/spdk/endian.h 00:07:28.815 Processing file include/spdk/mmio.h 00:07:28.815 Processing file include/spdk/nvme.h 00:07:28.815 Processing file include/spdk/base64.h 00:07:28.815 Processing file include/spdk/bdev_module.h 00:07:28.815 Processing file include/spdk/util.h 00:07:28.815 Processing file include/spdk/histogram_data.h 00:07:28.815 Processing file include/spdk/thread.h 00:07:28.815 Processing file include/spdk_internal/utf.h 00:07:28.815 Processing file include/spdk_internal/rdma.h 00:07:28.815 Processing file include/spdk_internal/nvme_tcp.h 00:07:28.815 Processing file include/spdk_internal/sgl.h 00:07:28.815 Processing file include/spdk_internal/virtio.h 00:07:28.815 Processing file include/spdk_internal/sock.h 00:07:29.074 Processing file lib/accel/accel_sw.c 00:07:29.074 Processing file lib/accel/accel_rpc.c 00:07:29.074 Processing file lib/accel/accel.c 00:07:29.333 Processing file lib/bdev/bdev.c 00:07:29.333 Processing file lib/bdev/part.c 00:07:29.333 Processing file lib/bdev/bdev_zone.c 00:07:29.333 Processing file lib/bdev/bdev_rpc.c 00:07:29.333 Processing file lib/bdev/scsi_nvme.c 00:07:29.592 Processing file lib/blob/blobstore.c 00:07:29.592 Processing file lib/blob/request.c 00:07:29.592 Processing file lib/blob/blobstore.h 00:07:29.592 Processing file lib/blob/zeroes.c 00:07:29.592 Processing file lib/blob/blob_bs_dev.c 00:07:29.592 Processing file lib/blobfs/blobfs.c 00:07:29.592 Processing file lib/blobfs/tree.c 00:07:29.592 Processing file lib/conf/conf.c 00:07:29.852 Processing file lib/dma/dma.c 00:07:30.111 Processing file lib/env_dpdk/pci_virtio.c 00:07:30.111 Processing file lib/env_dpdk/env.c 00:07:30.111 Processing file lib/env_dpdk/init.c 00:07:30.111 Processing file lib/env_dpdk/pci.c 00:07:30.111 Processing file lib/env_dpdk/threads.c 00:07:30.111 Processing file lib/env_dpdk/pci_vmd.c 00:07:30.111 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:07:30.111 Processing file lib/env_dpdk/pci_event.c 00:07:30.111 Processing file lib/env_dpdk/pci_idxd.c 00:07:30.111 Processing file lib/env_dpdk/sigbus_handler.c 00:07:30.111 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:07:30.111 Processing file lib/env_dpdk/pci_dpdk.c 00:07:30.111 Processing file lib/env_dpdk/pci_ioat.c 00:07:30.111 Processing file lib/env_dpdk/memory.c 00:07:30.111 Processing file lib/event/app_rpc.c 00:07:30.111 Processing file lib/event/scheduler_static.c 00:07:30.111 Processing file lib/event/log_rpc.c 00:07:30.111 Processing file lib/event/app.c 00:07:30.111 Processing file lib/event/reactor.c 00:07:30.678 Processing file lib/ftl/ftl_nv_cache.h 00:07:30.678 Processing file lib/ftl/ftl_reloc.c 00:07:30.678 Processing file lib/ftl/ftl_writer.c 00:07:30.678 Processing file lib/ftl/ftl_init.c 00:07:30.678 Processing file lib/ftl/ftl_layout.c 00:07:30.678 Processing file lib/ftl/ftl_debug.h 00:07:30.678 Processing file lib/ftl/ftl_band.h 00:07:30.678 Processing file lib/ftl/ftl_l2p.c 00:07:30.678 Processing file lib/ftl/ftl_p2l.c 00:07:30.678 Processing file lib/ftl/ftl_writer.h 00:07:30.678 Processing file lib/ftl/ftl_l2p_cache.c 00:07:30.678 Processing file lib/ftl/ftl_nv_cache_io.h 
00:07:30.678 Processing file lib/ftl/ftl_io.h 00:07:30.678 Processing file lib/ftl/ftl_debug.c 00:07:30.678 Processing file lib/ftl/ftl_trace.c 00:07:30.678 Processing file lib/ftl/ftl_core.h 00:07:30.678 Processing file lib/ftl/ftl_nv_cache.c 00:07:30.678 Processing file lib/ftl/ftl_band.c 00:07:30.678 Processing file lib/ftl/ftl_io.c 00:07:30.678 Processing file lib/ftl/ftl_rq.c 00:07:30.678 Processing file lib/ftl/ftl_l2p_flat.c 00:07:30.679 Processing file lib/ftl/ftl_band_ops.c 00:07:30.679 Processing file lib/ftl/ftl_sb.c 00:07:30.679 Processing file lib/ftl/ftl_core.c 00:07:30.679 Processing file lib/ftl/base/ftl_base_dev.c 00:07:30.679 Processing file lib/ftl/base/ftl_base_bdev.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:07:30.937 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:07:30.937 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:07:30.937 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:07:30.937 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:07:30.938 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:07:30.938 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:07:30.938 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:07:31.197 Processing file lib/ftl/utils/ftl_md.c 00:07:31.197 Processing file lib/ftl/utils/ftl_addr_utils.h 00:07:31.197 Processing file lib/ftl/utils/ftl_df.h 00:07:31.197 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:07:31.197 Processing file lib/ftl/utils/ftl_property.h 00:07:31.197 Processing file lib/ftl/utils/ftl_conf.c 00:07:31.197 Processing file lib/ftl/utils/ftl_property.c 00:07:31.197 Processing file lib/ftl/utils/ftl_bitmap.c 00:07:31.197 Processing file lib/ftl/utils/ftl_mempool.c 00:07:31.197 Processing file lib/idxd/idxd_internal.h 00:07:31.197 Processing file lib/idxd/idxd_user.c 00:07:31.197 Processing file lib/idxd/idxd.c 00:07:31.455 Processing file lib/init/subsystem.c 00:07:31.455 Processing file lib/init/subsystem_rpc.c 00:07:31.455 Processing file lib/init/rpc.c 00:07:31.455 Processing file lib/init/json_config.c 00:07:31.455 Processing file lib/ioat/ioat.c 00:07:31.455 Processing file lib/ioat/ioat_internal.h 00:07:31.714 Processing file lib/iscsi/iscsi.h 00:07:31.714 Processing file lib/iscsi/portal_grp.c 00:07:31.714 Processing file lib/iscsi/md5.c 00:07:31.714 Processing file lib/iscsi/conn.c 00:07:31.714 Processing file lib/iscsi/iscsi.c 00:07:31.714 Processing file lib/iscsi/init_grp.c 00:07:31.714 Processing file lib/iscsi/tgt_node.c 00:07:31.714 Processing file lib/iscsi/task.h 00:07:31.714 Processing file lib/iscsi/param.c 00:07:31.714 Processing file lib/iscsi/task.c 00:07:31.714 Processing file lib/iscsi/iscsi_rpc.c 00:07:31.714 Processing file lib/iscsi/iscsi_subsystem.c 00:07:31.973 Processing file lib/json/json_parse.c 00:07:31.973 Processing file lib/json/json_util.c 00:07:31.973 Processing file lib/json/json_write.c 00:07:31.973 Processing file 
lib/jsonrpc/jsonrpc_client_tcp.c 00:07:31.973 Processing file lib/jsonrpc/jsonrpc_server.c 00:07:31.973 Processing file lib/jsonrpc/jsonrpc_client.c 00:07:31.973 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:07:31.973 Processing file lib/log/log.c 00:07:31.973 Processing file lib/log/log_deprecated.c 00:07:31.973 Processing file lib/log/log_flags.c 00:07:32.233 Processing file lib/lvol/lvol.c 00:07:32.233 Processing file lib/nbd/nbd.c 00:07:32.233 Processing file lib/nbd/nbd_rpc.c 00:07:32.233 Processing file lib/notify/notify.c 00:07:32.233 Processing file lib/notify/notify_rpc.c 00:07:32.801 Processing file lib/nvme/nvme_ctrlr.c 00:07:32.801 Processing file lib/nvme/nvme_poll_group.c 00:07:32.801 Processing file lib/nvme/nvme_pcie_common.c 00:07:32.801 Processing file lib/nvme/nvme_discovery.c 00:07:32.801 Processing file lib/nvme/nvme_quirks.c 00:07:32.801 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:07:32.801 Processing file lib/nvme/nvme_transport.c 00:07:32.802 Processing file lib/nvme/nvme.c 00:07:32.802 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:07:32.802 Processing file lib/nvme/nvme_internal.h 00:07:32.802 Processing file lib/nvme/nvme_qpair.c 00:07:32.802 Processing file lib/nvme/nvme_rdma.c 00:07:32.802 Processing file lib/nvme/nvme_tcp.c 00:07:32.802 Processing file lib/nvme/nvme_vfio_user.c 00:07:32.802 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:07:32.802 Processing file lib/nvme/nvme_pcie_internal.h 00:07:32.802 Processing file lib/nvme/nvme_cuse.c 00:07:32.802 Processing file lib/nvme/nvme_io_msg.c 00:07:32.802 Processing file lib/nvme/nvme_ns_cmd.c 00:07:32.802 Processing file lib/nvme/nvme_ns.c 00:07:32.802 Processing file lib/nvme/nvme_opal.c 00:07:32.802 Processing file lib/nvme/nvme_pcie.c 00:07:32.802 Processing file lib/nvme/nvme_fabric.c 00:07:32.802 Processing file lib/nvme/nvme_zns.c 00:07:33.369 Processing file lib/nvmf/transport.c 00:07:33.369 Processing file lib/nvmf/ctrlr_discovery.c 00:07:33.369 Processing file lib/nvmf/rdma.c 00:07:33.369 Processing file lib/nvmf/tcp.c 00:07:33.369 Processing file lib/nvmf/nvmf_rpc.c 00:07:33.369 Processing file lib/nvmf/ctrlr.c 00:07:33.369 Processing file lib/nvmf/nvmf_internal.h 00:07:33.369 Processing file lib/nvmf/subsystem.c 00:07:33.369 Processing file lib/nvmf/nvmf.c 00:07:33.369 Processing file lib/nvmf/ctrlr_bdev.c 00:07:33.369 Processing file lib/rdma/common.c 00:07:33.369 Processing file lib/rdma/rdma_verbs.c 00:07:33.369 Processing file lib/rpc/rpc.c 00:07:33.628 Processing file lib/scsi/scsi_rpc.c 00:07:33.628 Processing file lib/scsi/port.c 00:07:33.628 Processing file lib/scsi/dev.c 00:07:33.628 Processing file lib/scsi/scsi_bdev.c 00:07:33.628 Processing file lib/scsi/scsi.c 00:07:33.628 Processing file lib/scsi/scsi_pr.c 00:07:33.628 Processing file lib/scsi/task.c 00:07:33.628 Processing file lib/scsi/lun.c 00:07:33.887 Processing file lib/sock/sock.c 00:07:33.887 Processing file lib/sock/sock_rpc.c 00:07:33.887 Processing file lib/thread/iobuf.c 00:07:33.887 Processing file lib/thread/thread.c 00:07:33.887 Processing file lib/trace/trace_rpc.c 00:07:33.887 Processing file lib/trace/trace_flags.c 00:07:33.887 Processing file lib/trace/trace.c 00:07:34.145 Processing file lib/trace_parser/trace.cpp 00:07:34.145 Processing file lib/ut/ut.c 00:07:34.145 Processing file lib/ut_mock/mock.c 00:07:34.731 Processing file lib/util/zipf.c 00:07:34.731 Processing file lib/util/crc16.c 00:07:34.731 Processing file lib/util/dif.c 00:07:34.731 Processing file lib/util/fd.c 00:07:34.731 Processing file 
lib/util/file.c 00:07:34.731 Processing file lib/util/cpuset.c 00:07:34.731 Processing file lib/util/xor.c 00:07:34.731 Processing file lib/util/hexlify.c 00:07:34.731 Processing file lib/util/iov.c 00:07:34.731 Processing file lib/util/math.c 00:07:34.731 Processing file lib/util/bit_array.c 00:07:34.731 Processing file lib/util/uuid.c 00:07:34.731 Processing file lib/util/fd_group.c 00:07:34.731 Processing file lib/util/crc32_ieee.c 00:07:34.731 Processing file lib/util/crc32.c 00:07:34.731 Processing file lib/util/base64.c 00:07:34.731 Processing file lib/util/crc32c.c 00:07:34.731 Processing file lib/util/crc64.c 00:07:34.731 Processing file lib/util/strerror_tls.c 00:07:34.731 Processing file lib/util/pipe.c 00:07:34.731 Processing file lib/util/string.c 00:07:34.731 Processing file lib/vfio_user/host/vfio_user.c 00:07:34.731 Processing file lib/vfio_user/host/vfio_user_pci.c 00:07:34.731 Processing file lib/vhost/vhost_blk.c 00:07:34.731 Processing file lib/vhost/vhost_scsi.c 00:07:34.731 Processing file lib/vhost/vhost.c 00:07:34.731 Processing file lib/vhost/rte_vhost_user.c 00:07:34.731 Processing file lib/vhost/vhost_internal.h 00:07:34.731 Processing file lib/vhost/vhost_rpc.c 00:07:35.015 Processing file lib/virtio/virtio_pci.c 00:07:35.015 Processing file lib/virtio/virtio.c 00:07:35.015 Processing file lib/virtio/virtio_vfio_user.c 00:07:35.015 Processing file lib/virtio/virtio_vhost_user.c 00:07:35.015 Processing file lib/vmd/led.c 00:07:35.015 Processing file lib/vmd/vmd.c 00:07:35.274 Processing file module/accel/dsa/accel_dsa.c 00:07:35.274 Processing file module/accel/dsa/accel_dsa_rpc.c 00:07:35.274 Processing file module/accel/error/accel_error.c 00:07:35.274 Processing file module/accel/error/accel_error_rpc.c 00:07:35.274 Processing file module/accel/iaa/accel_iaa_rpc.c 00:07:35.274 Processing file module/accel/iaa/accel_iaa.c 00:07:35.274 Processing file module/accel/ioat/accel_ioat_rpc.c 00:07:35.274 Processing file module/accel/ioat/accel_ioat.c 00:07:35.531 Processing file module/bdev/aio/bdev_aio.c 00:07:35.531 Processing file module/bdev/aio/bdev_aio_rpc.c 00:07:35.531 Processing file module/bdev/delay/vbdev_delay.c 00:07:35.531 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:07:35.789 Processing file module/bdev/error/vbdev_error.c 00:07:35.789 Processing file module/bdev/error/vbdev_error_rpc.c 00:07:35.789 Processing file module/bdev/ftl/bdev_ftl.c 00:07:35.790 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:07:35.790 Processing file module/bdev/gpt/gpt.c 00:07:35.790 Processing file module/bdev/gpt/gpt.h 00:07:35.790 Processing file module/bdev/gpt/vbdev_gpt.c 00:07:36.048 Processing file module/bdev/iscsi/bdev_iscsi.c 00:07:36.048 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:07:36.048 Processing file module/bdev/lvol/vbdev_lvol.c 00:07:36.048 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:07:36.306 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:07:36.306 Processing file module/bdev/malloc/bdev_malloc.c 00:07:36.306 Processing file module/bdev/null/bdev_null.c 00:07:36.306 Processing file module/bdev/null/bdev_null_rpc.c 00:07:36.565 Processing file module/bdev/nvme/nvme_rpc.c 00:07:36.565 Processing file module/bdev/nvme/vbdev_opal.c 00:07:36.565 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:07:36.565 Processing file module/bdev/nvme/bdev_nvme.c 00:07:36.565 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:07:36.565 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:07:36.565 Processing file 
module/bdev/nvme/bdev_mdns_client.c 00:07:36.823 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:07:36.823 Processing file module/bdev/passthru/vbdev_passthru.c 00:07:37.081 Processing file module/bdev/raid/bdev_raid_rpc.c 00:07:37.081 Processing file module/bdev/raid/raid1.c 00:07:37.081 Processing file module/bdev/raid/bdev_raid.h 00:07:37.081 Processing file module/bdev/raid/raid0.c 00:07:37.081 Processing file module/bdev/raid/bdev_raid_sb.c 00:07:37.081 Processing file module/bdev/raid/concat.c 00:07:37.081 Processing file module/bdev/raid/bdev_raid.c 00:07:37.081 Processing file module/bdev/raid/raid5f.c 00:07:37.081 Processing file module/bdev/split/vbdev_split_rpc.c 00:07:37.081 Processing file module/bdev/split/vbdev_split.c 00:07:37.081 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:07:37.081 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:07:37.081 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:07:37.339 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:07:37.339 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:07:37.339 Processing file module/blob/bdev/blob_bdev.c 00:07:37.339 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:07:37.339 Processing file module/blobfs/bdev/blobfs_bdev.c 00:07:37.598 Processing file module/env_dpdk/env_dpdk_rpc.c 00:07:37.598 Processing file module/event/subsystems/accel/accel.c 00:07:37.598 Processing file module/event/subsystems/bdev/bdev.c 00:07:37.857 Processing file module/event/subsystems/iobuf/iobuf.c 00:07:37.857 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:07:37.857 Processing file module/event/subsystems/iscsi/iscsi.c 00:07:37.857 Processing file module/event/subsystems/nbd/nbd.c 00:07:37.857 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:07:37.857 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:07:38.115 Processing file module/event/subsystems/scheduler/scheduler.c 00:07:38.115 Processing file module/event/subsystems/scsi/scsi.c 00:07:38.115 Processing file module/event/subsystems/sock/sock.c 00:07:38.374 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:07:38.374 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:07:38.374 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:07:38.374 Processing file module/event/subsystems/vmd/vmd.c 00:07:38.374 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:07:38.632 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:07:38.632 Processing file module/scheduler/gscheduler/gscheduler.c 00:07:38.632 Processing file module/sock/sock_kernel.h 00:07:38.891 Processing file module/sock/posix/posix.c 00:07:38.891 Writing directory view page. 
00:07:38.891 Overall coverage rate: 00:07:38.891 lines......: 39.1% (39241 of 100366 lines) 00:07:38.891 functions..: 42.8% (3585 of 8382 functions) 00:07:38.891 00:07:38.891 00:07:38.891 ===================== 00:07:38.891 All unit tests passed 00:07:38.891 ===================== 00:07:38.891 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:38.891 12:52:57 -- unit/unittest.sh@302 -- # set +x 00:07:38.891 00:07:38.891 00:07:38.891 ************************************ 00:07:38.891 END TEST unittest 00:07:38.891 ************************************ 00:07:38.891 00:07:38.891 real 3m8.048s 00:07:38.891 user 2m43.323s 00:07:38.891 sys 0m13.737s 00:07:38.891 12:52:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.891 12:52:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.891 12:52:57 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:07:38.891 12:52:57 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:38.891 12:52:57 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:38.891 12:52:57 -- spdk/autotest.sh@173 -- # timing_enter lib 00:07:38.891 12:52:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:38.891 12:52:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.891 12:52:57 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:38.891 12:52:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.891 12:52:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.891 12:52:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.891 ************************************ 00:07:38.891 START TEST env 00:07:38.891 ************************************ 00:07:38.891 12:52:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:38.891 * Looking for test storage... 
00:07:38.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:38.891 12:52:57 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:38.891 12:52:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.891 12:52:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.891 12:52:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.891 ************************************ 00:07:38.891 START TEST env_memory 00:07:38.891 ************************************ 00:07:38.891 12:52:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:38.891 00:07:38.891 00:07:38.891 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.891 http://cunit.sourceforge.net/ 00:07:38.891 00:07:38.891 00:07:38.891 Suite: memory 00:07:38.891 Test: alloc and free memory map ...[2024-06-11 12:52:57.699695] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:39.150 passed 00:07:39.150 Test: mem map translation ...[2024-06-11 12:52:57.779653] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:39.150 [2024-06-11 12:52:57.779929] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:39.150 [2024-06-11 12:52:57.780080] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:39.151 [2024-06-11 12:52:57.780189] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:39.151 passed 00:07:39.151 Test: mem map registration ...[2024-06-11 12:52:57.868181] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:39.151 [2024-06-11 12:52:57.868397] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:39.151 passed 00:07:39.151 Test: mem map adjacent registrations ...passed 00:07:39.151 00:07:39.151 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.151 suites 1 1 n/a 0 0 00:07:39.151 tests 4 4 4 0 0 00:07:39.151 asserts 152 152 152 0 n/a 00:07:39.151 00:07:39.151 Elapsed time = 0.330 seconds 00:07:39.151 00:07:39.151 real 0m0.363s 00:07:39.151 user 0m0.336s 00:07:39.151 sys 0m0.026s 00:07:39.151 12:52:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.151 12:52:57 -- common/autotest_common.sh@10 -- # set +x 00:07:39.151 ************************************ 00:07:39.151 END TEST env_memory 00:07:39.151 ************************************ 00:07:39.410 12:52:58 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:39.410 12:52:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:39.410 12:52:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.410 12:52:58 -- common/autotest_common.sh@10 -- # set +x 00:07:39.410 ************************************ 00:07:39.410 START TEST env_vtophys 00:07:39.410 ************************************ 00:07:39.410 12:52:58 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:39.410 EAL: lib.eal log level changed from notice to debug 00:07:39.410 EAL: Detected lcore 0 as core 0 on socket 0 00:07:39.410 EAL: Detected lcore 1 as core 0 on socket 0 00:07:39.410 EAL: Detected lcore 2 as core 0 on socket 0 00:07:39.410 EAL: Detected lcore 3 as core 0 on socket 0 00:07:39.410 EAL: Detected lcore 4 as core 0 on socket 0 00:07:39.410 EAL: Detected lcore 5 as core 0 on socket 0 00:07:39.410 EAL: Detected lcore 6 as core 0 on socket 0 00:07:39.410 EAL: Detected lcore 7 as core 0 on socket 0 00:07:39.410 EAL: Detected lcore 8 as core 0 on socket 0 00:07:39.410 EAL: Detected lcore 9 as core 0 on socket 0 00:07:39.410 EAL: Maximum logical cores by configuration: 128 00:07:39.410 EAL: Detected CPU lcores: 10 00:07:39.410 EAL: Detected NUMA nodes: 1 00:07:39.410 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:39.410 EAL: Checking presence of .so 'librte_eal.so.24' 00:07:39.410 EAL: Checking presence of .so 'librte_eal.so' 00:07:39.410 EAL: Detected static linkage of DPDK 00:07:39.410 EAL: No shared files mode enabled, IPC will be disabled 00:07:39.410 EAL: Selected IOVA mode 'PA' 00:07:39.410 EAL: Probing VFIO support... 00:07:39.410 EAL: IOMMU type 1 (Type 1) is supported 00:07:39.410 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:39.410 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:39.410 EAL: VFIO support initialized 00:07:39.410 EAL: Ask a virtual area of 0x2e000 bytes 00:07:39.410 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:39.410 EAL: Setting up physically contiguous memory... 00:07:39.410 EAL: Setting maximum number of open files to 1048576 00:07:39.410 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:39.410 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:39.410 EAL: Ask a virtual area of 0x61000 bytes 00:07:39.410 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:39.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:39.410 EAL: Ask a virtual area of 0x400000000 bytes 00:07:39.410 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:39.410 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:39.410 EAL: Ask a virtual area of 0x61000 bytes 00:07:39.410 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:39.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:39.410 EAL: Ask a virtual area of 0x400000000 bytes 00:07:39.410 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:39.410 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:39.410 EAL: Ask a virtual area of 0x61000 bytes 00:07:39.410 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:39.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:39.410 EAL: Ask a virtual area of 0x400000000 bytes 00:07:39.410 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:39.410 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:39.410 EAL: Ask a virtual area of 0x61000 bytes 00:07:39.410 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:39.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:39.410 EAL: Ask a virtual area of 0x400000000 bytes 00:07:39.410 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:39.410 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:39.410 EAL: Hugepages will be freed exactly as allocated. 
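The EAL messages above describe the hugepage-backed memseg lists that the vtophys test depends on: only memory carved out of those segments has a stable physical (IOVA) translation. As a rough sketch of what the test exercises, the fragment below allocates DMA-safe memory from that heap and asks SPDK for its translation. It assumes spdk_env_opts_init/spdk_env_init, spdk_dma_zmalloc/spdk_dma_free, spdk_vtophys and SPDK_VTOPHYS_ERROR from include/spdk/env.h; the spdk_vtophys prototype has varied between releases (older trees take only the buffer pointer), so the two-argument form is an assumption, and the application name is made up.

#include <stdio.h>
#include <inttypes.h>
#include "spdk/env.h"

int main(void)
{
	struct spdk_env_opts opts;
	void *buf;
	uint64_t paddr;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch";          /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* 1 MiB from the hugepage-backed heap, 2 MiB-aligned, zeroed. */
	buf = spdk_dma_zmalloc(1024 * 1024, 0x200000, NULL);
	if (buf == NULL) {
		fprintf(stderr, "allocation failed\n");
		return 1;
	}

	/* Translate the virtual address to a physical address / IOVA.
	 * NULL length: only the start of the mapping is of interest here. */
	paddr = spdk_vtophys(buf, NULL);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		fprintf(stderr, "no translation for %p\n", buf);
	} else {
		printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
	}

	spdk_dma_free(buf);
	return 0;
}

Buffers that bypass the SPDK/DPDK heap (plain malloc) generally have no registered translation, which is typically the failure mode the negative cases in a suite like this probe for.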
00:07:39.410 EAL: No shared files mode enabled, IPC is disabled 00:07:39.410 EAL: No shared files mode enabled, IPC is disabled 00:07:39.410 EAL: TSC frequency is ~2200000 KHz 00:07:39.410 EAL: Main lcore 0 is ready (tid=7f1bb8e15a40;cpuset=[0]) 00:07:39.410 EAL: Trying to obtain current memory policy. 00:07:39.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.410 EAL: Restoring previous memory policy: 0 00:07:39.410 EAL: request: mp_malloc_sync 00:07:39.410 EAL: No shared files mode enabled, IPC is disabled 00:07:39.410 EAL: Heap on socket 0 was expanded by 2MB 00:07:39.410 EAL: No shared files mode enabled, IPC is disabled 00:07:39.410 EAL: Mem event callback 'spdk:(nil)' registered 00:07:39.669 00:07:39.669 00:07:39.669 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.669 http://cunit.sourceforge.net/ 00:07:39.669 00:07:39.669 00:07:39.669 Suite: components_suite 00:07:39.928 Test: vtophys_malloc_test ...passed 00:07:39.928 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:39.928 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.928 EAL: Restoring previous memory policy: 0 00:07:39.928 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.928 EAL: request: mp_malloc_sync 00:07:39.928 EAL: No shared files mode enabled, IPC is disabled 00:07:39.928 EAL: Heap on socket 0 was expanded by 4MB 00:07:39.928 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.928 EAL: request: mp_malloc_sync 00:07:39.928 EAL: No shared files mode enabled, IPC is disabled 00:07:39.928 EAL: Heap on socket 0 was shrunk by 4MB 00:07:39.928 EAL: Trying to obtain current memory policy. 00:07:39.928 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.928 EAL: Restoring previous memory policy: 0 00:07:39.928 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.928 EAL: request: mp_malloc_sync 00:07:39.928 EAL: No shared files mode enabled, IPC is disabled 00:07:39.928 EAL: Heap on socket 0 was expanded by 6MB 00:07:39.928 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.928 EAL: request: mp_malloc_sync 00:07:39.928 EAL: No shared files mode enabled, IPC is disabled 00:07:39.928 EAL: Heap on socket 0 was shrunk by 6MB 00:07:39.928 EAL: Trying to obtain current memory policy. 00:07:39.928 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.928 EAL: Restoring previous memory policy: 0 00:07:39.928 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.928 EAL: request: mp_malloc_sync 00:07:39.928 EAL: No shared files mode enabled, IPC is disabled 00:07:39.928 EAL: Heap on socket 0 was expanded by 10MB 00:07:39.928 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.928 EAL: request: mp_malloc_sync 00:07:39.928 EAL: No shared files mode enabled, IPC is disabled 00:07:39.928 EAL: Heap on socket 0 was shrunk by 10MB 00:07:39.928 EAL: Trying to obtain current memory policy. 00:07:39.928 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.928 EAL: Restoring previous memory policy: 0 00:07:39.928 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.928 EAL: request: mp_malloc_sync 00:07:39.928 EAL: No shared files mode enabled, IPC is disabled 00:07:39.928 EAL: Heap on socket 0 was expanded by 18MB 00:07:39.928 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.928 EAL: request: mp_malloc_sync 00:07:39.928 EAL: No shared files mode enabled, IPC is disabled 00:07:39.928 EAL: Heap on socket 0 was shrunk by 18MB 00:07:39.928 EAL: Trying to obtain current memory policy. 
00:07:39.928 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.928 EAL: Restoring previous memory policy: 0 00:07:39.928 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.928 EAL: request: mp_malloc_sync 00:07:39.928 EAL: No shared files mode enabled, IPC is disabled 00:07:39.928 EAL: Heap on socket 0 was expanded by 34MB 00:07:40.187 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.187 EAL: request: mp_malloc_sync 00:07:40.187 EAL: No shared files mode enabled, IPC is disabled 00:07:40.187 EAL: Heap on socket 0 was shrunk by 34MB 00:07:40.187 EAL: Trying to obtain current memory policy. 00:07:40.187 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.187 EAL: Restoring previous memory policy: 0 00:07:40.187 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.187 EAL: request: mp_malloc_sync 00:07:40.187 EAL: No shared files mode enabled, IPC is disabled 00:07:40.187 EAL: Heap on socket 0 was expanded by 66MB 00:07:40.187 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.187 EAL: request: mp_malloc_sync 00:07:40.187 EAL: No shared files mode enabled, IPC is disabled 00:07:40.187 EAL: Heap on socket 0 was shrunk by 66MB 00:07:40.445 EAL: Trying to obtain current memory policy. 00:07:40.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.445 EAL: Restoring previous memory policy: 0 00:07:40.445 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.445 EAL: request: mp_malloc_sync 00:07:40.445 EAL: No shared files mode enabled, IPC is disabled 00:07:40.445 EAL: Heap on socket 0 was expanded by 130MB 00:07:40.445 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.703 EAL: request: mp_malloc_sync 00:07:40.703 EAL: No shared files mode enabled, IPC is disabled 00:07:40.703 EAL: Heap on socket 0 was shrunk by 130MB 00:07:40.703 EAL: Trying to obtain current memory policy. 00:07:40.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.703 EAL: Restoring previous memory policy: 0 00:07:40.703 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.703 EAL: request: mp_malloc_sync 00:07:40.703 EAL: No shared files mode enabled, IPC is disabled 00:07:40.703 EAL: Heap on socket 0 was expanded by 258MB 00:07:41.270 EAL: Calling mem event callback 'spdk:(nil)' 00:07:41.270 EAL: request: mp_malloc_sync 00:07:41.270 EAL: No shared files mode enabled, IPC is disabled 00:07:41.270 EAL: Heap on socket 0 was shrunk by 258MB 00:07:41.528 EAL: Trying to obtain current memory policy. 00:07:41.528 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:41.786 EAL: Restoring previous memory policy: 0 00:07:41.786 EAL: Calling mem event callback 'spdk:(nil)' 00:07:41.786 EAL: request: mp_malloc_sync 00:07:41.786 EAL: No shared files mode enabled, IPC is disabled 00:07:41.786 EAL: Heap on socket 0 was expanded by 514MB 00:07:42.355 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.613 EAL: request: mp_malloc_sync 00:07:42.613 EAL: No shared files mode enabled, IPC is disabled 00:07:42.613 EAL: Heap on socket 0 was shrunk by 514MB 00:07:43.177 EAL: Trying to obtain current memory policy. 
00:07:43.177 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.454 EAL: Restoring previous memory policy: 0 00:07:43.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.454 EAL: request: mp_malloc_sync 00:07:43.454 EAL: No shared files mode enabled, IPC is disabled 00:07:43.454 EAL: Heap on socket 0 was expanded by 1026MB 00:07:44.853 EAL: Calling mem event callback 'spdk:(nil)' 00:07:45.111 EAL: request: mp_malloc_sync 00:07:45.111 EAL: No shared files mode enabled, IPC is disabled 00:07:45.111 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:46.487 passed 00:07:46.487 00:07:46.487 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.487 suites 1 1 n/a 0 0 00:07:46.487 tests 2 2 2 0 0 00:07:46.487 asserts 6545 6545 6545 0 n/a 00:07:46.487 00:07:46.487 Elapsed time = 6.856 seconds 00:07:46.487 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.487 EAL: request: mp_malloc_sync 00:07:46.487 EAL: No shared files mode enabled, IPC is disabled 00:07:46.487 EAL: Heap on socket 0 was shrunk by 2MB 00:07:46.487 EAL: No shared files mode enabled, IPC is disabled 00:07:46.487 EAL: No shared files mode enabled, IPC is disabled 00:07:46.487 EAL: No shared files mode enabled, IPC is disabled 00:07:46.487 00:07:46.487 real 0m7.145s 00:07:46.487 user 0m6.049s 00:07:46.487 sys 0m0.965s 00:07:46.487 12:53:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.487 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.487 ************************************ 00:07:46.487 END TEST env_vtophys 00:07:46.487 ************************************ 00:07:46.487 12:53:05 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:46.487 12:53:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.487 12:53:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.487 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.487 ************************************ 00:07:46.487 START TEST env_pci 00:07:46.487 ************************************ 00:07:46.487 12:53:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:46.487 00:07:46.487 00:07:46.487 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.487 http://cunit.sourceforge.net/ 00:07:46.487 00:07:46.487 00:07:46.487 Suite: pci 00:07:46.487 Test: pci_hook ...[2024-06-11 12:53:05.240259] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 104867 has claimed it 00:07:46.487 EAL: Cannot find device (10000:00:01.0) 00:07:46.487 EAL: Failed to attach device on primary process 00:07:46.487 passed 00:07:46.487 00:07:46.487 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.487 suites 1 1 n/a 0 0 00:07:46.487 tests 1 1 1 0 0 00:07:46.487 asserts 25 25 25 0 n/a 00:07:46.487 00:07:46.487 Elapsed time = 0.005 seconds 00:07:46.487 00:07:46.487 real 0m0.074s 00:07:46.487 user 0m0.038s 00:07:46.487 sys 0m0.033s 00:07:46.487 12:53:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.487 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.487 ************************************ 00:07:46.487 END TEST env_pci 00:07:46.487 ************************************ 00:07:46.487 12:53:05 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:46.487 12:53:05 -- env/env.sh@15 -- # uname 00:07:46.487 12:53:05 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:46.487 12:53:05 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:07:46.487 12:53:05 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:46.487 12:53:05 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:46.487 12:53:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.487 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.745 ************************************ 00:07:46.745 START TEST env_dpdk_post_init 00:07:46.745 ************************************ 00:07:46.745 12:53:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:46.745 EAL: Detected CPU lcores: 10 00:07:46.745 EAL: Detected NUMA nodes: 1 00:07:46.745 EAL: Detected static linkage of DPDK 00:07:46.745 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:46.745 EAL: Selected IOVA mode 'PA' 00:07:46.745 EAL: VFIO support initialized 00:07:46.745 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:46.745 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:07:46.745 Starting DPDK initialization... 00:07:46.745 Starting SPDK post initialization... 00:07:46.745 SPDK NVMe probe 00:07:46.745 Attaching to 0000:00:06.0 00:07:46.745 Attached to 0000:00:06.0 00:07:46.745 Cleaning up... 00:07:46.745 00:07:46.745 real 0m0.250s 00:07:46.745 user 0m0.073s 00:07:46.745 sys 0m0.077s 00:07:46.745 12:53:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.745 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.745 ************************************ 00:07:46.745 END TEST env_dpdk_post_init 00:07:46.745 ************************************ 00:07:47.002 12:53:05 -- env/env.sh@26 -- # uname 00:07:47.002 12:53:05 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:47.002 12:53:05 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:47.002 12:53:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:47.002 12:53:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.002 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:07:47.002 ************************************ 00:07:47.002 START TEST env_mem_callbacks 00:07:47.002 ************************************ 00:07:47.002 12:53:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:47.002 EAL: Detected CPU lcores: 10 00:07:47.002 EAL: Detected NUMA nodes: 1 00:07:47.002 EAL: Detected static linkage of DPDK 00:07:47.002 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:47.002 EAL: Selected IOVA mode 'PA' 00:07:47.002 EAL: VFIO support initialized 00:07:47.002 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:47.002 00:07:47.002 00:07:47.002 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.002 http://cunit.sourceforge.net/ 00:07:47.002 00:07:47.002 00:07:47.002 Suite: memory 00:07:47.002 Test: test ... 
00:07:47.002 register 0x200000200000 2097152 00:07:47.002 malloc 3145728 00:07:47.002 register 0x200000400000 4194304 00:07:47.002 buf 0x2000004fffc0 len 3145728 PASSED 00:07:47.002 malloc 64 00:07:47.002 buf 0x2000004ffec0 len 64 PASSED 00:07:47.002 malloc 4194304 00:07:47.002 register 0x200000800000 6291456 00:07:47.002 buf 0x2000009fffc0 len 4194304 PASSED 00:07:47.002 free 0x2000004fffc0 3145728 00:07:47.002 free 0x2000004ffec0 64 00:07:47.002 unregister 0x200000400000 4194304 PASSED 00:07:47.002 free 0x2000009fffc0 4194304 00:07:47.002 unregister 0x200000800000 6291456 PASSED 00:07:47.002 malloc 8388608 00:07:47.002 register 0x200000400000 10485760 00:07:47.261 buf 0x2000005fffc0 len 8388608 PASSED 00:07:47.261 free 0x2000005fffc0 8388608 00:07:47.261 unregister 0x200000400000 10485760 PASSED 00:07:47.261 passed 00:07:47.261 00:07:47.261 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.261 suites 1 1 n/a 0 0 00:07:47.261 tests 1 1 1 0 0 00:07:47.261 asserts 15 15 15 0 n/a 00:07:47.261 00:07:47.261 Elapsed time = 0.050 seconds 00:07:47.261 ************************************ 00:07:47.261 END TEST env_mem_callbacks 00:07:47.261 ************************************ 00:07:47.261 00:07:47.261 real 0m0.267s 00:07:47.261 user 0m0.110s 00:07:47.261 sys 0m0.054s 00:07:47.261 12:53:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.261 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:07:47.261 00:07:47.261 real 0m8.380s 00:07:47.261 user 0m6.775s 00:07:47.261 sys 0m1.252s 00:07:47.261 12:53:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.261 ************************************ 00:07:47.261 END TEST env 00:07:47.261 ************************************ 00:07:47.261 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:07:47.261 12:53:05 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:47.261 12:53:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:47.261 12:53:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.261 12:53:05 -- common/autotest_common.sh@10 -- # set +x 00:07:47.261 ************************************ 00:07:47.261 START TEST rpc 00:07:47.261 ************************************ 00:07:47.261 12:53:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:47.261 * Looking for test storage... 00:07:47.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:47.261 12:53:06 -- rpc/rpc.sh@65 -- # spdk_pid=104997 00:07:47.261 12:53:06 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:47.261 12:53:06 -- rpc/rpc.sh@67 -- # waitforlisten 104997 00:07:47.261 12:53:06 -- common/autotest_common.sh@819 -- # '[' -z 104997 ']' 00:07:47.261 12:53:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.261 12:53:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:47.261 12:53:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
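The rpc.sh run at this point starts a standalone spdk_tgt (launched with -e bdev, visible just below) and then waits for its RPC socket before issuing any commands. A minimal sketch of that start-and-wait pattern — not the waitforlisten helper itself — assuming the default /var/tmp/spdk.sock socket and the repo layout shown in the log:

  # launch the target, remember its pid, then poll the RPC socket until it answers
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done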
00:07:47.261 12:53:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:47.261 12:53:06 -- common/autotest_common.sh@10 -- # set +x 00:07:47.261 12:53:06 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:47.520 [2024-06-11 12:53:06.102593] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:47.520 [2024-06-11 12:53:06.102958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104997 ] 00:07:47.520 [2024-06-11 12:53:06.258964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.777 [2024-06-11 12:53:06.436472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:47.777 [2024-06-11 12:53:06.436958] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:47.777 [2024-06-11 12:53:06.437103] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 104997' to capture a snapshot of events at runtime. 00:07:47.777 [2024-06-11 12:53:06.437242] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid104997 for offline analysis/debug. 00:07:47.777 [2024-06-11 12:53:06.437358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.153 12:53:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:49.153 12:53:07 -- common/autotest_common.sh@852 -- # return 0 00:07:49.153 12:53:07 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:49.153 12:53:07 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:49.153 12:53:07 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:49.153 12:53:07 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:49.153 12:53:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:49.153 12:53:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.153 12:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:49.153 ************************************ 00:07:49.153 START TEST rpc_integrity 00:07:49.153 ************************************ 00:07:49.153 12:53:07 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:07:49.153 12:53:07 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:49.153 12:53:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.153 12:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:49.153 12:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.153 12:53:07 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:49.153 12:53:07 -- rpc/rpc.sh@13 -- # jq length 00:07:49.153 12:53:07 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:49.153 12:53:07 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:49.153 12:53:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.153 12:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:49.153 12:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.153 12:53:07 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:49.153 12:53:07 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:49.153 12:53:07 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.153 12:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:49.153 12:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.153 12:53:07 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:49.153 { 00:07:49.153 "name": "Malloc0", 00:07:49.153 "aliases": [ 00:07:49.153 "ed32c120-a27a-4b22-a9ac-bdd24e5e67de" 00:07:49.153 ], 00:07:49.153 "product_name": "Malloc disk", 00:07:49.153 "block_size": 512, 00:07:49.153 "num_blocks": 16384, 00:07:49.153 "uuid": "ed32c120-a27a-4b22-a9ac-bdd24e5e67de", 00:07:49.153 "assigned_rate_limits": { 00:07:49.153 "rw_ios_per_sec": 0, 00:07:49.153 "rw_mbytes_per_sec": 0, 00:07:49.154 "r_mbytes_per_sec": 0, 00:07:49.154 "w_mbytes_per_sec": 0 00:07:49.154 }, 00:07:49.154 "claimed": false, 00:07:49.154 "zoned": false, 00:07:49.154 "supported_io_types": { 00:07:49.154 "read": true, 00:07:49.154 "write": true, 00:07:49.154 "unmap": true, 00:07:49.154 "write_zeroes": true, 00:07:49.154 "flush": true, 00:07:49.154 "reset": true, 00:07:49.154 "compare": false, 00:07:49.154 "compare_and_write": false, 00:07:49.154 "abort": true, 00:07:49.154 "nvme_admin": false, 00:07:49.154 "nvme_io": false 00:07:49.154 }, 00:07:49.154 "memory_domains": [ 00:07:49.154 { 00:07:49.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.154 "dma_device_type": 2 00:07:49.154 } 00:07:49.154 ], 00:07:49.154 "driver_specific": {} 00:07:49.154 } 00:07:49.154 ]' 00:07:49.154 12:53:07 -- rpc/rpc.sh@17 -- # jq length 00:07:49.154 12:53:07 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:49.154 12:53:07 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:49.154 12:53:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.154 12:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:49.154 [2024-06-11 12:53:07.883129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:49.154 [2024-06-11 12:53:07.883357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.154 [2024-06-11 12:53:07.883435] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:49.154 [2024-06-11 12:53:07.883595] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.154 [2024-06-11 12:53:07.886042] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.154 [2024-06-11 12:53:07.886237] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:49.154 Passthru0 00:07:49.154 12:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.154 12:53:07 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:49.154 12:53:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.154 12:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:49.154 12:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.154 12:53:07 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:49.154 { 00:07:49.154 "name": "Malloc0", 00:07:49.154 "aliases": [ 00:07:49.154 "ed32c120-a27a-4b22-a9ac-bdd24e5e67de" 00:07:49.154 ], 00:07:49.154 "product_name": "Malloc disk", 00:07:49.154 "block_size": 512, 00:07:49.154 "num_blocks": 16384, 00:07:49.154 "uuid": "ed32c120-a27a-4b22-a9ac-bdd24e5e67de", 00:07:49.154 "assigned_rate_limits": { 00:07:49.154 "rw_ios_per_sec": 0, 00:07:49.154 "rw_mbytes_per_sec": 0, 00:07:49.154 "r_mbytes_per_sec": 0, 00:07:49.154 "w_mbytes_per_sec": 0 00:07:49.154 }, 00:07:49.154 "claimed": true, 00:07:49.154 "claim_type": "exclusive_write", 00:07:49.154 
"zoned": false, 00:07:49.154 "supported_io_types": { 00:07:49.154 "read": true, 00:07:49.154 "write": true, 00:07:49.154 "unmap": true, 00:07:49.154 "write_zeroes": true, 00:07:49.154 "flush": true, 00:07:49.154 "reset": true, 00:07:49.154 "compare": false, 00:07:49.154 "compare_and_write": false, 00:07:49.154 "abort": true, 00:07:49.154 "nvme_admin": false, 00:07:49.154 "nvme_io": false 00:07:49.154 }, 00:07:49.154 "memory_domains": [ 00:07:49.154 { 00:07:49.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.154 "dma_device_type": 2 00:07:49.154 } 00:07:49.154 ], 00:07:49.154 "driver_specific": {} 00:07:49.154 }, 00:07:49.154 { 00:07:49.154 "name": "Passthru0", 00:07:49.154 "aliases": [ 00:07:49.154 "004ddf15-ca35-5166-8350-66d411029ef2" 00:07:49.154 ], 00:07:49.154 "product_name": "passthru", 00:07:49.154 "block_size": 512, 00:07:49.154 "num_blocks": 16384, 00:07:49.154 "uuid": "004ddf15-ca35-5166-8350-66d411029ef2", 00:07:49.154 "assigned_rate_limits": { 00:07:49.154 "rw_ios_per_sec": 0, 00:07:49.154 "rw_mbytes_per_sec": 0, 00:07:49.154 "r_mbytes_per_sec": 0, 00:07:49.154 "w_mbytes_per_sec": 0 00:07:49.154 }, 00:07:49.154 "claimed": false, 00:07:49.154 "zoned": false, 00:07:49.154 "supported_io_types": { 00:07:49.154 "read": true, 00:07:49.154 "write": true, 00:07:49.154 "unmap": true, 00:07:49.154 "write_zeroes": true, 00:07:49.154 "flush": true, 00:07:49.154 "reset": true, 00:07:49.154 "compare": false, 00:07:49.154 "compare_and_write": false, 00:07:49.154 "abort": true, 00:07:49.154 "nvme_admin": false, 00:07:49.154 "nvme_io": false 00:07:49.154 }, 00:07:49.154 "memory_domains": [ 00:07:49.154 { 00:07:49.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.154 "dma_device_type": 2 00:07:49.154 } 00:07:49.154 ], 00:07:49.154 "driver_specific": { 00:07:49.154 "passthru": { 00:07:49.154 "name": "Passthru0", 00:07:49.154 "base_bdev_name": "Malloc0" 00:07:49.154 } 00:07:49.154 } 00:07:49.154 } 00:07:49.154 ]' 00:07:49.154 12:53:07 -- rpc/rpc.sh@21 -- # jq length 00:07:49.154 12:53:07 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:49.154 12:53:07 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:49.154 12:53:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.154 12:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:49.154 12:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.154 12:53:07 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:49.154 12:53:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.154 12:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:49.154 12:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.154 12:53:07 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:49.154 12:53:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.154 12:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:49.154 12:53:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.154 12:53:07 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:49.154 12:53:07 -- rpc/rpc.sh@26 -- # jq length 00:07:49.413 12:53:08 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:49.413 00:07:49.413 real 0m0.302s 00:07:49.413 user 0m0.195s 00:07:49.413 sys 0m0.025s 00:07:49.413 12:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.413 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.413 ************************************ 00:07:49.413 END TEST rpc_integrity 00:07:49.413 ************************************ 00:07:49.413 12:53:08 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 
00:07:49.413 12:53:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:49.413 12:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.413 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.413 ************************************ 00:07:49.413 START TEST rpc_plugins 00:07:49.413 ************************************ 00:07:49.413 12:53:08 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:07:49.413 12:53:08 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:49.413 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.413 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.413 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.413 12:53:08 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:49.413 12:53:08 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:49.413 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.413 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.413 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.413 12:53:08 -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:49.413 { 00:07:49.413 "name": "Malloc1", 00:07:49.413 "aliases": [ 00:07:49.413 "90f1492a-9316-484c-8603-dcf63499019f" 00:07:49.413 ], 00:07:49.413 "product_name": "Malloc disk", 00:07:49.413 "block_size": 4096, 00:07:49.413 "num_blocks": 256, 00:07:49.413 "uuid": "90f1492a-9316-484c-8603-dcf63499019f", 00:07:49.413 "assigned_rate_limits": { 00:07:49.413 "rw_ios_per_sec": 0, 00:07:49.413 "rw_mbytes_per_sec": 0, 00:07:49.413 "r_mbytes_per_sec": 0, 00:07:49.413 "w_mbytes_per_sec": 0 00:07:49.413 }, 00:07:49.413 "claimed": false, 00:07:49.413 "zoned": false, 00:07:49.413 "supported_io_types": { 00:07:49.413 "read": true, 00:07:49.413 "write": true, 00:07:49.413 "unmap": true, 00:07:49.413 "write_zeroes": true, 00:07:49.413 "flush": true, 00:07:49.413 "reset": true, 00:07:49.413 "compare": false, 00:07:49.413 "compare_and_write": false, 00:07:49.413 "abort": true, 00:07:49.413 "nvme_admin": false, 00:07:49.413 "nvme_io": false 00:07:49.413 }, 00:07:49.413 "memory_domains": [ 00:07:49.413 { 00:07:49.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.413 "dma_device_type": 2 00:07:49.413 } 00:07:49.413 ], 00:07:49.413 "driver_specific": {} 00:07:49.413 } 00:07:49.413 ]' 00:07:49.413 12:53:08 -- rpc/rpc.sh@32 -- # jq length 00:07:49.413 12:53:08 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:49.414 12:53:08 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:49.414 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.414 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.414 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.414 12:53:08 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:49.414 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.414 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.414 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.414 12:53:08 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:49.414 12:53:08 -- rpc/rpc.sh@36 -- # jq length 00:07:49.414 12:53:08 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:49.414 00:07:49.414 real 0m0.159s 00:07:49.414 user 0m0.113s 00:07:49.414 sys 0m0.013s 00:07:49.414 ************************************ 00:07:49.414 END TEST rpc_plugins 00:07:49.414 ************************************ 00:07:49.414 12:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.414 12:53:08 -- common/autotest_common.sh@10 -- # set +x 
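rpc_plugins, which just completed, drives the same target through an out-of-tree rpc.py plugin: PYTHONPATH was extended earlier in the run to include test/rpc_plugins, so --plugin rpc_plugin can add the create_malloc/delete_malloc commands. A rough sketch, assuming that PYTHONPATH export is already in place:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  name=$($rpc -s /var/tmp/spdk.sock --plugin rpc_plugin create_malloc)  # returned Malloc1 in this run
  $rpc -s /var/tmp/spdk.sock bdev_get_bdevs | jq length                 # the plugin-created bdev shows up
  $rpc -s /var/tmp/spdk.sock --plugin rpc_plugin delete_malloc "$name"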
00:07:49.672 12:53:08 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:49.672 12:53:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:49.672 12:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.672 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.672 ************************************ 00:07:49.672 START TEST rpc_trace_cmd_test 00:07:49.672 ************************************ 00:07:49.672 12:53:08 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:07:49.672 12:53:08 -- rpc/rpc.sh@40 -- # local info 00:07:49.672 12:53:08 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:49.672 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.672 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.672 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.672 12:53:08 -- rpc/rpc.sh@42 -- # info='{ 00:07:49.672 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid104997", 00:07:49.672 "tpoint_group_mask": "0x8", 00:07:49.672 "iscsi_conn": { 00:07:49.672 "mask": "0x2", 00:07:49.672 "tpoint_mask": "0x0" 00:07:49.672 }, 00:07:49.672 "scsi": { 00:07:49.672 "mask": "0x4", 00:07:49.672 "tpoint_mask": "0x0" 00:07:49.672 }, 00:07:49.672 "bdev": { 00:07:49.672 "mask": "0x8", 00:07:49.672 "tpoint_mask": "0xffffffffffffffff" 00:07:49.672 }, 00:07:49.672 "nvmf_rdma": { 00:07:49.672 "mask": "0x10", 00:07:49.672 "tpoint_mask": "0x0" 00:07:49.672 }, 00:07:49.672 "nvmf_tcp": { 00:07:49.672 "mask": "0x20", 00:07:49.672 "tpoint_mask": "0x0" 00:07:49.672 }, 00:07:49.672 "ftl": { 00:07:49.672 "mask": "0x40", 00:07:49.672 "tpoint_mask": "0x0" 00:07:49.673 }, 00:07:49.673 "blobfs": { 00:07:49.673 "mask": "0x80", 00:07:49.673 "tpoint_mask": "0x0" 00:07:49.673 }, 00:07:49.673 "dsa": { 00:07:49.673 "mask": "0x200", 00:07:49.673 "tpoint_mask": "0x0" 00:07:49.673 }, 00:07:49.673 "thread": { 00:07:49.673 "mask": "0x400", 00:07:49.673 "tpoint_mask": "0x0" 00:07:49.673 }, 00:07:49.673 "nvme_pcie": { 00:07:49.673 "mask": "0x800", 00:07:49.673 "tpoint_mask": "0x0" 00:07:49.673 }, 00:07:49.673 "iaa": { 00:07:49.673 "mask": "0x1000", 00:07:49.673 "tpoint_mask": "0x0" 00:07:49.673 }, 00:07:49.673 "nvme_tcp": { 00:07:49.673 "mask": "0x2000", 00:07:49.673 "tpoint_mask": "0x0" 00:07:49.673 }, 00:07:49.673 "bdev_nvme": { 00:07:49.673 "mask": "0x4000", 00:07:49.673 "tpoint_mask": "0x0" 00:07:49.673 } 00:07:49.673 }' 00:07:49.673 12:53:08 -- rpc/rpc.sh@43 -- # jq length 00:07:49.673 12:53:08 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:07:49.673 12:53:08 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:49.673 12:53:08 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:49.673 12:53:08 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:49.673 12:53:08 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:49.673 12:53:08 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:49.931 12:53:08 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:49.931 12:53:08 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:49.931 12:53:08 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:49.931 ************************************ 00:07:49.931 END TEST rpc_trace_cmd_test 00:07:49.931 ************************************ 00:07:49.931 00:07:49.931 real 0m0.303s 00:07:49.931 user 0m0.281s 00:07:49.931 sys 0m0.013s 00:07:49.931 12:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.931 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.931 12:53:08 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:49.931 12:53:08 -- 
rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:49.931 12:53:08 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:49.931 12:53:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:49.931 12:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.931 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.931 ************************************ 00:07:49.931 START TEST rpc_daemon_integrity 00:07:49.931 ************************************ 00:07:49.931 12:53:08 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:07:49.932 12:53:08 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:49.932 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.932 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.932 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.932 12:53:08 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:49.932 12:53:08 -- rpc/rpc.sh@13 -- # jq length 00:07:49.932 12:53:08 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:49.932 12:53:08 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:49.932 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.932 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.932 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.932 12:53:08 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:49.932 12:53:08 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:49.932 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.932 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.932 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.932 12:53:08 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:49.932 { 00:07:49.932 "name": "Malloc2", 00:07:49.932 "aliases": [ 00:07:49.932 "18c66404-4211-4389-9ccc-9ae1f321d15b" 00:07:49.932 ], 00:07:49.932 "product_name": "Malloc disk", 00:07:49.932 "block_size": 512, 00:07:49.932 "num_blocks": 16384, 00:07:49.932 "uuid": "18c66404-4211-4389-9ccc-9ae1f321d15b", 00:07:49.932 "assigned_rate_limits": { 00:07:49.932 "rw_ios_per_sec": 0, 00:07:49.932 "rw_mbytes_per_sec": 0, 00:07:49.932 "r_mbytes_per_sec": 0, 00:07:49.932 "w_mbytes_per_sec": 0 00:07:49.932 }, 00:07:49.932 "claimed": false, 00:07:49.932 "zoned": false, 00:07:49.932 "supported_io_types": { 00:07:49.932 "read": true, 00:07:49.932 "write": true, 00:07:49.932 "unmap": true, 00:07:49.932 "write_zeroes": true, 00:07:49.932 "flush": true, 00:07:49.932 "reset": true, 00:07:49.932 "compare": false, 00:07:49.932 "compare_and_write": false, 00:07:49.932 "abort": true, 00:07:49.932 "nvme_admin": false, 00:07:49.932 "nvme_io": false 00:07:49.932 }, 00:07:49.932 "memory_domains": [ 00:07:49.932 { 00:07:49.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.932 "dma_device_type": 2 00:07:49.932 } 00:07:49.932 ], 00:07:49.932 "driver_specific": {} 00:07:49.932 } 00:07:49.932 ]' 00:07:49.932 12:53:08 -- rpc/rpc.sh@17 -- # jq length 00:07:50.190 12:53:08 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:50.190 12:53:08 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:50.190 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.190 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.190 [2024-06-11 12:53:08.798511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:50.190 [2024-06-11 12:53:08.798706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.190 [2024-06-11 12:53:08.798788] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:50.190 [2024-06-11 12:53:08.799012] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.190 [2024-06-11 12:53:08.801484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.190 [2024-06-11 12:53:08.801672] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:50.190 Passthru0 00:07:50.190 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.190 12:53:08 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:50.191 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.191 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.191 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.191 12:53:08 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:50.191 { 00:07:50.191 "name": "Malloc2", 00:07:50.191 "aliases": [ 00:07:50.191 "18c66404-4211-4389-9ccc-9ae1f321d15b" 00:07:50.191 ], 00:07:50.191 "product_name": "Malloc disk", 00:07:50.191 "block_size": 512, 00:07:50.191 "num_blocks": 16384, 00:07:50.191 "uuid": "18c66404-4211-4389-9ccc-9ae1f321d15b", 00:07:50.191 "assigned_rate_limits": { 00:07:50.191 "rw_ios_per_sec": 0, 00:07:50.191 "rw_mbytes_per_sec": 0, 00:07:50.191 "r_mbytes_per_sec": 0, 00:07:50.191 "w_mbytes_per_sec": 0 00:07:50.191 }, 00:07:50.191 "claimed": true, 00:07:50.191 "claim_type": "exclusive_write", 00:07:50.191 "zoned": false, 00:07:50.191 "supported_io_types": { 00:07:50.191 "read": true, 00:07:50.191 "write": true, 00:07:50.191 "unmap": true, 00:07:50.191 "write_zeroes": true, 00:07:50.191 "flush": true, 00:07:50.191 "reset": true, 00:07:50.191 "compare": false, 00:07:50.191 "compare_and_write": false, 00:07:50.191 "abort": true, 00:07:50.191 "nvme_admin": false, 00:07:50.191 "nvme_io": false 00:07:50.191 }, 00:07:50.191 "memory_domains": [ 00:07:50.191 { 00:07:50.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.191 "dma_device_type": 2 00:07:50.191 } 00:07:50.191 ], 00:07:50.191 "driver_specific": {} 00:07:50.191 }, 00:07:50.191 { 00:07:50.191 "name": "Passthru0", 00:07:50.191 "aliases": [ 00:07:50.191 "4c6a656f-5c30-58cf-b141-eaefbe498327" 00:07:50.191 ], 00:07:50.191 "product_name": "passthru", 00:07:50.191 "block_size": 512, 00:07:50.191 "num_blocks": 16384, 00:07:50.191 "uuid": "4c6a656f-5c30-58cf-b141-eaefbe498327", 00:07:50.191 "assigned_rate_limits": { 00:07:50.191 "rw_ios_per_sec": 0, 00:07:50.191 "rw_mbytes_per_sec": 0, 00:07:50.191 "r_mbytes_per_sec": 0, 00:07:50.191 "w_mbytes_per_sec": 0 00:07:50.191 }, 00:07:50.191 "claimed": false, 00:07:50.191 "zoned": false, 00:07:50.191 "supported_io_types": { 00:07:50.191 "read": true, 00:07:50.191 "write": true, 00:07:50.191 "unmap": true, 00:07:50.191 "write_zeroes": true, 00:07:50.191 "flush": true, 00:07:50.191 "reset": true, 00:07:50.191 "compare": false, 00:07:50.191 "compare_and_write": false, 00:07:50.191 "abort": true, 00:07:50.191 "nvme_admin": false, 00:07:50.191 "nvme_io": false 00:07:50.191 }, 00:07:50.191 "memory_domains": [ 00:07:50.191 { 00:07:50.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.191 "dma_device_type": 2 00:07:50.191 } 00:07:50.191 ], 00:07:50.191 "driver_specific": { 00:07:50.191 "passthru": { 00:07:50.191 "name": "Passthru0", 00:07:50.191 "base_bdev_name": "Malloc2" 00:07:50.191 } 00:07:50.191 } 00:07:50.191 } 00:07:50.191 ]' 00:07:50.191 12:53:08 -- rpc/rpc.sh@21 -- # jq length 00:07:50.191 12:53:08 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:50.191 12:53:08 -- 
rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:50.191 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.191 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.191 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.191 12:53:08 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:50.191 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.191 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.191 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.191 12:53:08 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:50.191 12:53:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.191 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.191 12:53:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.191 12:53:08 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:50.191 12:53:08 -- rpc/rpc.sh@26 -- # jq length 00:07:50.191 12:53:08 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:50.191 00:07:50.191 real 0m0.330s 00:07:50.191 user 0m0.230s 00:07:50.191 sys 0m0.018s 00:07:50.191 ************************************ 00:07:50.191 END TEST rpc_daemon_integrity 00:07:50.191 ************************************ 00:07:50.191 12:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.191 12:53:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.191 12:53:09 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:50.191 12:53:09 -- rpc/rpc.sh@84 -- # killprocess 104997 00:07:50.191 12:53:09 -- common/autotest_common.sh@926 -- # '[' -z 104997 ']' 00:07:50.191 12:53:09 -- common/autotest_common.sh@930 -- # kill -0 104997 00:07:50.191 12:53:09 -- common/autotest_common.sh@931 -- # uname 00:07:50.191 12:53:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:50.191 12:53:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104997 00:07:50.191 12:53:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:50.191 12:53:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:50.191 killing process with pid 104997 00:07:50.191 12:53:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104997' 00:07:50.191 12:53:09 -- common/autotest_common.sh@945 -- # kill 104997 00:07:50.191 12:53:09 -- common/autotest_common.sh@950 -- # wait 104997 00:07:52.095 ************************************ 00:07:52.095 END TEST rpc 00:07:52.095 ************************************ 00:07:52.095 00:07:52.095 real 0m4.923s 00:07:52.095 user 0m5.869s 00:07:52.095 sys 0m0.662s 00:07:52.095 12:53:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.095 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:07:52.095 12:53:10 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:52.095 12:53:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:52.095 12:53:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.095 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:07:52.353 ************************************ 00:07:52.353 START TEST rpc_client 00:07:52.353 ************************************ 00:07:52.353 12:53:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:52.353 * Looking for test storage... 
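The killprocess teardown a little further up (just before END TEST rpc) follows the usual autotest pattern: check that the pid is still alive, make sure it is an SPDK reactor rather than a sudo wrapper, then kill it and wait. A hedged sketch of those checks, with $spdk_pid standing in for the target's pid:

  if kill -0 "$spdk_pid" 2>/dev/null; then
    proc=$(ps --no-headers -o comm= "$spdk_pid")   # reactor_0 for an SPDK target
    [ "$proc" != sudo ] && echo "killing process with pid $spdk_pid"
    kill "$spdk_pid"
    wait "$spdk_pid"
  fi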
00:07:52.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:52.353 12:53:11 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:52.353 OK 00:07:52.353 12:53:11 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:52.353 ************************************ 00:07:52.353 END TEST rpc_client 00:07:52.353 ************************************ 00:07:52.353 00:07:52.353 real 0m0.141s 00:07:52.353 user 0m0.082s 00:07:52.353 sys 0m0.069s 00:07:52.353 12:53:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.353 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:07:52.353 12:53:11 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:52.353 12:53:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:52.353 12:53:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.353 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:07:52.353 ************************************ 00:07:52.353 START TEST json_config 00:07:52.353 ************************************ 00:07:52.353 12:53:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:52.353 12:53:11 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:52.353 12:53:11 -- nvmf/common.sh@7 -- # uname -s 00:07:52.353 12:53:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.353 12:53:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.353 12:53:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.353 12:53:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.353 12:53:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.353 12:53:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.353 12:53:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.353 12:53:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.353 12:53:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.353 12:53:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.353 12:53:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7049c16-bd87-4321-a921-288c2f2837a6 00:07:52.353 12:53:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7049c16-bd87-4321-a921-288c2f2837a6 00:07:52.353 12:53:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.353 12:53:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.353 12:53:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:52.353 12:53:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.353 12:53:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.353 12:53:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.353 12:53:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.353 12:53:11 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:52.353 12:53:11 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:52.353 12:53:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:52.353 12:53:11 -- paths/export.sh@5 -- # export PATH 00:07:52.353 12:53:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:52.353 12:53:11 -- nvmf/common.sh@46 -- # : 0 00:07:52.353 12:53:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:52.353 12:53:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:52.354 12:53:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:52.354 12:53:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.354 12:53:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.354 12:53:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:52.354 12:53:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:52.354 12:53:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:52.354 12:53:11 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:07:52.612 12:53:11 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:07:52.612 12:53:11 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:07:52.612 12:53:11 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:52.612 12:53:11 -- json_config/json_config.sh@30 -- # app_pid=([target]="" [initiator]="") 00:07:52.612 12:53:11 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:07:52.612 12:53:11 -- json_config/json_config.sh@31 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:07:52.612 12:53:11 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:07:52.612 12:53:11 -- json_config/json_config.sh@32 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:07:52.612 12:53:11 -- json_config/json_config.sh@32 -- # declare -A app_params 00:07:52.612 12:53:11 -- json_config/json_config.sh@33 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:07:52.612 12:53:11 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:07:52.612 12:53:11 -- json_config/json_config.sh@43 -- # last_event_id=0 00:07:52.612 12:53:11 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:52.612 12:53:11 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:07:52.612 INFO: JSON configuration test 
init 00:07:52.612 12:53:11 -- json_config/json_config.sh@420 -- # json_config_test_init 00:07:52.612 12:53:11 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:07:52.612 12:53:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:52.612 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:07:52.612 12:53:11 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:07:52.612 12:53:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:52.612 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:07:52.612 12:53:11 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:07:52.612 12:53:11 -- json_config/json_config.sh@98 -- # local app=target 00:07:52.612 12:53:11 -- json_config/json_config.sh@99 -- # shift 00:07:52.612 12:53:11 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:52.612 12:53:11 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:52.612 12:53:11 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:52.612 12:53:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:52.612 12:53:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:52.612 12:53:11 -- json_config/json_config.sh@111 -- # app_pid[$app]=105315 00:07:52.612 12:53:11 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:52.612 Waiting for target to run... 00:07:52.612 12:53:11 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:52.612 12:53:11 -- json_config/json_config.sh@114 -- # waitforlisten 105315 /var/tmp/spdk_tgt.sock 00:07:52.612 12:53:11 -- common/autotest_common.sh@819 -- # '[' -z 105315 ']' 00:07:52.612 12:53:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:52.612 12:53:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:52.613 12:53:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:52.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:52.613 12:53:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:52.613 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:07:52.613 [2024-06-11 12:53:11.278432] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:52.613 [2024-06-11 12:53:11.279523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105315 ] 00:07:53.179 [2024-06-11 12:53:11.732828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.179 [2024-06-11 12:53:11.896278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:53.179 [2024-06-11 12:53:11.896680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.438 00:07:53.438 12:53:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:53.438 12:53:12 -- common/autotest_common.sh@852 -- # return 0 00:07:53.438 12:53:12 -- json_config/json_config.sh@115 -- # echo '' 00:07:53.438 12:53:12 -- json_config/json_config.sh@322 -- # create_accel_config 00:07:53.438 12:53:12 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:07:53.438 12:53:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:53.438 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:07:53.438 12:53:12 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:07:53.438 12:53:12 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:07:53.438 12:53:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:53.438 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:07:53.438 12:53:12 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:53.438 12:53:12 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:07:53.438 12:53:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:54.373 12:53:13 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:07:54.373 12:53:13 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:07:54.373 12:53:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:54.373 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:54.373 12:53:13 -- json_config/json_config.sh@48 -- # local ret=0 00:07:54.373 12:53:13 -- json_config/json_config.sh@49 -- # enabled_types=("bdev_register" "bdev_unregister") 00:07:54.373 12:53:13 -- json_config/json_config.sh@49 -- # local enabled_types 00:07:54.373 12:53:13 -- json_config/json_config.sh@51 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:07:54.373 12:53:13 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:54.373 12:53:13 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:54.373 12:53:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:54.631 12:53:13 -- json_config/json_config.sh@51 -- # local get_types 00:07:54.631 12:53:13 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:54.631 12:53:13 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:07:54.632 12:53:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:54.632 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 12:53:13 -- json_config/json_config.sh@58 -- # return 0 00:07:54.632 12:53:13 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:07:54.632 12:53:13 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:07:54.632 12:53:13 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:07:54.632 12:53:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:54.632 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 12:53:13 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:07:54.632 12:53:13 -- json_config/json_config.sh@160 -- # local expected_notifications 00:07:54.632 12:53:13 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:07:54.632 12:53:13 -- json_config/json_config.sh@164 -- # get_notifications 00:07:54.632 12:53:13 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:54.632 12:53:13 -- json_config/json_config.sh@64 -- # IFS=: 00:07:54.632 12:53:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:54.632 12:53:13 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:54.632 12:53:13 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:54.632 12:53:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:54.890 12:53:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:54.890 12:53:13 -- json_config/json_config.sh@64 -- # IFS=: 00:07:54.890 12:53:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:54.890 12:53:13 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:07:54.890 12:53:13 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:07:54.890 12:53:13 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:54.890 12:53:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:55.147 Nvme0n1p0 Nvme0n1p1 00:07:55.147 12:53:13 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:55.147 12:53:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:55.405 [2024-06-11 12:53:14.153041] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:55.405 [2024-06-11 12:53:14.153329] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:55.405 00:07:55.405 12:53:14 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:55.405 12:53:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:55.662 Malloc3 00:07:55.662 12:53:14 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:55.662 12:53:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:55.920 [2024-06-11 12:53:14.622170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:55.920 [2024-06-11 12:53:14.622482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.920 [2024-06-11 12:53:14.622662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:55.920 [2024-06-11 12:53:14.622800] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:55.920 [2024-06-11 12:53:14.625485] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.920 [2024-06-11 12:53:14.625673] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:55.920 PTBdevFromMalloc3 00:07:55.920 12:53:14 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:55.920 12:53:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:56.178 Null0 00:07:56.178 12:53:14 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:56.178 12:53:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:56.436 Malloc0 00:07:56.436 12:53:15 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:56.436 12:53:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:56.694 Malloc1 00:07:56.694 12:53:15 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:56.694 12:53:15 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:07:56.953 102400+0 records in 00:07:56.953 102400+0 records out 00:07:56.953 104857600 bytes (105 MB, 100 MiB) copied, 0.352765 s, 297 MB/s 00:07:56.953 12:53:15 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:56.953 12:53:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:57.212 aio_disk 00:07:57.212 12:53:15 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:57.212 12:53:16 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:57.212 12:53:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:57.470 a4159bc2-9f20-4722-94fe-ccc482538bd9 00:07:57.470 12:53:16 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:57.471 12:53:16 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:57.471 12:53:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:57.729 12:53:16 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:57.729 12:53:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:07:57.989 12:53:16 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:57.989 12:53:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:58.247 12:53:16 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:58.247 12:53:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:58.507 12:53:17 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:07:58.507 12:53:17 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:07:58.507 12:53:17 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:ea084deb-839a-4f60-bae7-50e85e8a7751 bdev_register:389bed2c-ae1b-4be3-96f0-6279564fab66 bdev_register:7fafc35e-2679-49c6-8db2-c428dcc6ce93 bdev_register:7779c635-56ab-4d7c-be12-b9a470e18ac9 00:07:58.507 12:53:17 -- json_config/json_config.sh@70 -- # local events_to_check 00:07:58.507 12:53:17 -- json_config/json_config.sh@71 -- # local recorded_events 00:07:58.507 12:53:17 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:58.507 12:53:17 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:ea084deb-839a-4f60-bae7-50e85e8a7751 bdev_register:389bed2c-ae1b-4be3-96f0-6279564fab66 bdev_register:7fafc35e-2679-49c6-8db2-c428dcc6ce93 bdev_register:7779c635-56ab-4d7c-be12-b9a470e18ac9 00:07:58.507 12:53:17 -- json_config/json_config.sh@74 -- # sort 00:07:58.507 12:53:17 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:07:58.507 12:53:17 -- json_config/json_config.sh@75 -- # get_notifications 00:07:58.507 12:53:17 -- json_config/json_config.sh@75 -- # sort 00:07:58.507 12:53:17 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:58.507 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.507 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.507 12:53:17 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:58.507 12:53:17 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:58.507 12:53:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:ea084deb-839a-4f60-bae7-50e85e8a7751 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:389bed2c-ae1b-4be3-96f0-6279564fab66 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:7fafc35e-2679-49c6-8db2-c428dcc6ce93 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@65 -- # echo bdev_register:7779c635-56ab-4d7c-be12-b9a470e18ac9 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # IFS=: 00:07:58.787 12:53:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:58.787 12:53:17 -- json_config/json_config.sh@77 
-- # [[ bdev_register:389bed2c-ae1b-4be3-96f0-6279564fab66 bdev_register:7779c635-56ab-4d7c-be12-b9a470e18ac9 bdev_register:7fafc35e-2679-49c6-8db2-c428dcc6ce93 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:ea084deb-839a-4f60-bae7-50e85e8a7751 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\8\9\b\e\d\2\c\-\a\e\1\b\-\4\b\e\3\-\9\6\f\0\-\6\2\7\9\5\6\4\f\a\b\6\6\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\7\7\9\c\6\3\5\-\5\6\a\b\-\4\d\7\c\-\b\e\1\2\-\b\9\a\4\7\0\e\1\8\a\c\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\f\a\f\c\3\5\e\-\2\6\7\9\-\4\9\c\6\-\8\d\b\2\-\c\4\2\8\d\c\c\6\c\e\9\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\a\0\8\4\d\e\b\-\8\3\9\a\-\4\f\6\0\-\b\a\e\7\-\5\0\e\8\5\e\8\a\7\7\5\1 ]] 00:07:58.787 12:53:17 -- json_config/json_config.sh@89 -- # cat 00:07:58.787 12:53:17 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:389bed2c-ae1b-4be3-96f0-6279564fab66 bdev_register:7779c635-56ab-4d7c-be12-b9a470e18ac9 bdev_register:7fafc35e-2679-49c6-8db2-c428dcc6ce93 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:ea084deb-839a-4f60-bae7-50e85e8a7751 00:07:58.787 Expected events matched: 00:07:58.787 bdev_register:389bed2c-ae1b-4be3-96f0-6279564fab66 00:07:58.788 bdev_register:7779c635-56ab-4d7c-be12-b9a470e18ac9 00:07:58.788 bdev_register:7fafc35e-2679-49c6-8db2-c428dcc6ce93 00:07:58.788 bdev_register:Malloc0 00:07:58.788 bdev_register:Malloc0p0 00:07:58.788 bdev_register:Malloc0p1 00:07:58.788 bdev_register:Malloc0p2 00:07:58.788 bdev_register:Malloc1 00:07:58.788 bdev_register:Malloc3 00:07:58.788 bdev_register:Null0 00:07:58.788 bdev_register:Nvme0n1 00:07:58.788 bdev_register:Nvme0n1p0 00:07:58.788 bdev_register:Nvme0n1p1 00:07:58.788 bdev_register:PTBdevFromMalloc3 00:07:58.788 bdev_register:aio_disk 00:07:58.788 bdev_register:ea084deb-839a-4f60-bae7-50e85e8a7751 00:07:58.788 12:53:17 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:07:58.788 12:53:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:58.788 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:07:58.788 12:53:17 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:07:58.788 12:53:17 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:07:58.788 12:53:17 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:07:58.788 12:53:17 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:07:58.788 12:53:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:58.788 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:07:58.788 
12:53:17 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:07:58.788 12:53:17 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:58.788 12:53:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:59.086 MallocBdevForConfigChangeCheck 00:07:59.086 12:53:17 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:07:59.086 12:53:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:59.086 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:07:59.086 12:53:17 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:07:59.086 12:53:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:59.652 INFO: shutting down applications... 00:07:59.652 12:53:18 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:07:59.652 12:53:18 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:07:59.652 12:53:18 -- json_config/json_config.sh@431 -- # json_config_clear target 00:07:59.652 12:53:18 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:07:59.652 12:53:18 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:59.652 [2024-06-11 12:53:18.459151] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:59.909 Calling clear_vhost_scsi_subsystem 00:07:59.909 Calling clear_iscsi_subsystem 00:07:59.909 Calling clear_vhost_blk_subsystem 00:07:59.909 Calling clear_nbd_subsystem 00:07:59.909 Calling clear_nvmf_subsystem 00:07:59.909 Calling clear_bdev_subsystem 00:07:59.909 Calling clear_accel_subsystem 00:07:59.909 Calling clear_iobuf_subsystem 00:07:59.909 Calling clear_sock_subsystem 00:07:59.909 Calling clear_vmd_subsystem 00:07:59.909 Calling clear_scheduler_subsystem 00:07:59.909 12:53:18 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:59.909 12:53:18 -- json_config/json_config.sh@396 -- # count=100 00:07:59.909 12:53:18 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:07:59.909 12:53:18 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:59.909 12:53:18 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:59.909 12:53:18 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:00.477 12:53:19 -- json_config/json_config.sh@398 -- # break 00:08:00.477 12:53:19 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:08:00.477 12:53:19 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:08:00.477 12:53:19 -- json_config/json_config.sh@120 -- # local app=target 00:08:00.477 12:53:19 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:08:00.477 12:53:19 -- json_config/json_config.sh@124 -- # [[ -n 105315 ]] 00:08:00.477 12:53:19 -- json_config/json_config.sh@127 -- # kill -SIGINT 105315 00:08:00.477 12:53:19 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:08:00.477 12:53:19 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:00.477 12:53:19 -- 
json_config/json_config.sh@130 -- # kill -0 105315 00:08:00.477 12:53:19 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:00.735 12:53:19 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:00.735 12:53:19 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:00.735 12:53:19 -- json_config/json_config.sh@130 -- # kill -0 105315 00:08:00.735 12:53:19 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:01.303 SPDK target shutdown done 00:08:01.303 INFO: relaunching applications... 00:08:01.303 Waiting for target to run... 00:08:01.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:01.303 12:53:20 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:01.303 12:53:20 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:01.303 12:53:20 -- json_config/json_config.sh@130 -- # kill -0 105315 00:08:01.303 12:53:20 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:08:01.303 12:53:20 -- json_config/json_config.sh@132 -- # break 00:08:01.303 12:53:20 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:08:01.303 12:53:20 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:08:01.303 12:53:20 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:08:01.303 12:53:20 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:01.303 12:53:20 -- json_config/json_config.sh@98 -- # local app=target 00:08:01.303 12:53:20 -- json_config/json_config.sh@99 -- # shift 00:08:01.303 12:53:20 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:01.303 12:53:20 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:01.303 12:53:20 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:01.303 12:53:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:01.303 12:53:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:01.303 12:53:20 -- json_config/json_config.sh@111 -- # app_pid[$app]=105595 00:08:01.303 12:53:20 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:01.303 12:53:20 -- json_config/json_config.sh@114 -- # waitforlisten 105595 /var/tmp/spdk_tgt.sock 00:08:01.303 12:53:20 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:01.303 12:53:20 -- common/autotest_common.sh@819 -- # '[' -z 105595 ']' 00:08:01.303 12:53:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:01.303 12:53:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:01.303 12:53:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:01.303 12:53:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:01.303 12:53:20 -- common/autotest_common.sh@10 -- # set +x 00:08:01.303 [2024-06-11 12:53:20.116186] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
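The shutdown sequence above sends SIGINT to the target (pid 105315) and then probes it with "kill -0" every 0.5 s for up to 30 iterations. A minimal sketch of that pattern, assuming only what the xtrace shows — the function name and the failure path are illustrative, not the literal json_config.sh code:

  # Sketch: ask a target to exit, then wait for its pid to disappear.
  shutdown_app_sketch() {
      local pid=$1
      kill -SIGINT "$pid" 2>/dev/null || return 0   # nothing to do if it is already gone
      for ((i = 0; i < 30; i++)); do                # budget seen in the log: 30 * 0.5 s
          kill -0 "$pid" 2>/dev/null || return 0    # kill -0 only tests that the pid exists
          sleep 0.5
      done
      echo "pid $pid still alive after timeout" >&2
      return 1
  }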
00:08:01.303 [2024-06-11 12:53:20.116671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105595 ] 00:08:01.870 [2024-06-11 12:53:20.576963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.128 [2024-06-11 12:53:20.738747] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:02.128 [2024-06-11 12:53:20.739121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.695 [2024-06-11 12:53:21.319108] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:02.695 [2024-06-11 12:53:21.319490] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:02.695 [2024-06-11 12:53:21.327057] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:02.695 [2024-06-11 12:53:21.327286] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:02.695 [2024-06-11 12:53:21.335111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:02.695 [2024-06-11 12:53:21.335331] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:02.695 [2024-06-11 12:53:21.335455] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:02.696 [2024-06-11 12:53:21.425055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:02.696 [2024-06-11 12:53:21.425288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.696 [2024-06-11 12:53:21.425362] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:02.696 [2024-06-11 12:53:21.425617] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.696 [2024-06-11 12:53:21.426183] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.696 [2024-06-11 12:53:21.426409] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:02.954 00:08:02.954 INFO: Checking if target configuration is the same... 00:08:02.954 12:53:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:02.954 12:53:21 -- common/autotest_common.sh@852 -- # return 0 00:08:02.954 12:53:21 -- json_config/json_config.sh@115 -- # echo '' 00:08:02.954 12:53:21 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:08:02.954 12:53:21 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:02.954 12:53:21 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:02.954 12:53:21 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:08:02.954 12:53:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:02.954 + '[' 2 -ne 2 ']' 00:08:02.954 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:02.954 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
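Every "tgt_rpc <method> ..." step in this output expands, via json_config.sh@36, into the same rpc.py call against the target's UNIX socket, exactly as the xtrace prints it. A sketch of that wrapper, assuming the paths shown above:

  # Sketch: forward an RPC method and its arguments to the running target.
  tgt_rpc() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
  }
  # Example calls mirroring this test:
  #   tgt_rpc save_config
  #   tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck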
00:08:02.954 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:02.954 +++ basename /dev/fd/62 00:08:03.213 ++ mktemp /tmp/62.XXX 00:08:03.213 + tmp_file_1=/tmp/62.jax 00:08:03.213 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:03.213 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:03.213 + tmp_file_2=/tmp/spdk_tgt_config.json.gfE 00:08:03.213 + ret=0 00:08:03.213 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:03.476 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:03.476 + diff -u /tmp/62.jax /tmp/spdk_tgt_config.json.gfE 00:08:03.476 INFO: JSON config files are the same 00:08:03.476 + echo 'INFO: JSON config files are the same' 00:08:03.476 + rm /tmp/62.jax /tmp/spdk_tgt_config.json.gfE 00:08:03.476 + exit 0 00:08:03.476 INFO: changing configuration and checking if this can be detected... 00:08:03.476 12:53:22 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:08:03.476 12:53:22 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:03.476 12:53:22 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:03.476 12:53:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:03.733 12:53:22 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:03.733 12:53:22 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:08:03.733 12:53:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:03.733 + '[' 2 -ne 2 ']' 00:08:03.733 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:03.733 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:03.733 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:03.733 +++ basename /dev/fd/62 00:08:03.733 ++ mktemp /tmp/62.XXX 00:08:03.733 + tmp_file_1=/tmp/62.fFK 00:08:03.733 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:03.733 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:03.733 + tmp_file_2=/tmp/spdk_tgt_config.json.yH6 00:08:03.734 + ret=0 00:08:03.734 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:03.992 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:03.992 + diff -u /tmp/62.fFK /tmp/spdk_tgt_config.json.yH6 00:08:03.992 + ret=1 00:08:03.992 + echo '=== Start of file: /tmp/62.fFK ===' 00:08:03.992 + cat /tmp/62.fFK 00:08:03.992 + echo '=== End of file: /tmp/62.fFK ===' 00:08:03.992 + echo '' 00:08:03.992 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yH6 ===' 00:08:03.992 + cat /tmp/spdk_tgt_config.json.yH6 00:08:03.992 + echo '=== End of file: /tmp/spdk_tgt_config.json.yH6 ===' 00:08:03.992 + echo '' 00:08:03.992 + rm /tmp/62.fFK /tmp/spdk_tgt_config.json.yH6 00:08:04.250 + exit 1 00:08:04.250 INFO: configuration change detected. 00:08:04.250 12:53:22 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
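The "JSON config files are the same" and "configuration change detected." verdicts above come from dumping the live configuration with save_config, normalizing both sides with config_filter.py -method sort, and running diff -u on the results. A hedged sketch of that comparison follows; json_diff.sh actually feeds one side through /dev/fd/62, so the temp-file handling here is an illustrative simplification:

  # Sketch: compare the target's live config with the JSON it was started from.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  SOCK=/var/tmp/spdk_tgt.sock
  SAVED=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

  live=$(mktemp /tmp/62.XXX)
  saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
  "$RPC" -s "$SOCK" save_config | "$FILTER" -method sort > "$live"
  "$FILTER" -method sort < "$SAVED" > "$saved"

  if diff -u "$saved" "$live" > /dev/null; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi
  rm -f "$live" "$saved"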
00:08:04.250 12:53:22 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:08:04.250 12:53:22 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:08:04.250 12:53:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:04.250 12:53:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.250 12:53:22 -- json_config/json_config.sh@360 -- # local ret=0 00:08:04.250 12:53:22 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:08:04.250 12:53:22 -- json_config/json_config.sh@370 -- # [[ -n 105595 ]] 00:08:04.250 12:53:22 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:08:04.250 12:53:22 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:08:04.250 12:53:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:04.250 12:53:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.250 12:53:22 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:08:04.250 12:53:22 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:08:04.250 12:53:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:08:04.508 12:53:23 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:08:04.508 12:53:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:08:04.508 12:53:23 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:08:04.508 12:53:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:08:04.767 12:53:23 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:08:04.767 12:53:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:08:05.026 12:53:23 -- json_config/json_config.sh@246 -- # uname -s 00:08:05.026 12:53:23 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:08:05.026 12:53:23 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:08:05.026 12:53:23 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:08:05.026 12:53:23 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:08:05.026 12:53:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:05.026 12:53:23 -- common/autotest_common.sh@10 -- # set +x 00:08:05.026 12:53:23 -- json_config/json_config.sh@376 -- # killprocess 105595 00:08:05.026 12:53:23 -- common/autotest_common.sh@926 -- # '[' -z 105595 ']' 00:08:05.026 12:53:23 -- common/autotest_common.sh@930 -- # kill -0 105595 00:08:05.026 12:53:23 -- common/autotest_common.sh@931 -- # uname 00:08:05.026 12:53:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:05.026 12:53:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105595 00:08:05.026 killing process with pid 105595 00:08:05.026 12:53:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:05.026 12:53:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:05.026 12:53:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105595' 00:08:05.026 12:53:23 -- common/autotest_common.sh@945 -- # kill 105595 00:08:05.026 12:53:23 -- common/autotest_common.sh@950 -- # wait 105595 00:08:05.961 12:53:24 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:05.961 12:53:24 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:08:05.961 12:53:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:05.961 12:53:24 -- common/autotest_common.sh@10 -- # set +x 00:08:05.961 12:53:24 -- json_config/json_config.sh@381 -- # return 0 00:08:05.961 12:53:24 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:08:05.961 INFO: Success 00:08:05.961 ************************************ 00:08:05.961 END TEST json_config 00:08:05.961 ************************************ 00:08:05.961 00:08:05.961 real 0m13.541s 00:08:05.961 user 0m20.040s 00:08:05.961 sys 0m2.170s 00:08:05.961 12:53:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.961 12:53:24 -- common/autotest_common.sh@10 -- # set +x 00:08:05.961 12:53:24 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:05.962 12:53:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:05.962 12:53:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:05.962 12:53:24 -- common/autotest_common.sh@10 -- # set +x 00:08:05.962 ************************************ 00:08:05.962 START TEST json_config_extra_key 00:08:05.962 ************************************ 00:08:05.962 12:53:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.962 12:53:24 -- nvmf/common.sh@7 -- # uname -s 00:08:05.962 12:53:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.962 12:53:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.962 12:53:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.962 12:53:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.962 12:53:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.962 12:53:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.962 12:53:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.962 12:53:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.962 12:53:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.962 12:53:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.962 12:53:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2c655f35-1162-4d57-b795-308c36f2c48f 00:08:05.962 12:53:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=2c655f35-1162-4d57-b795-308c36f2c48f 00:08:05.962 12:53:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.962 12:53:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.962 12:53:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:05.962 12:53:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.962 12:53:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.962 12:53:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.962 12:53:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.962 12:53:24 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:05.962 12:53:24 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:05.962 12:53:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:05.962 12:53:24 -- paths/export.sh@5 -- # export PATH 00:08:05.962 12:53:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:05.962 12:53:24 -- nvmf/common.sh@46 -- # : 0 00:08:05.962 12:53:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:05.962 12:53:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:05.962 12:53:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:05.962 12:53:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.962 12:53:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.962 12:53:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:05.962 12:53:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:05.962 12:53:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@16 -- # app_pid=([target]="") 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@17 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@18 -- # app_params=([target]='-m 0x1 -s 1024') 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@19 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:08:05.962 INFO: launching applications... 
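json_config_extra_key.sh tracks its application through the associative arrays echoed above (app_pid, app_socket, app_params, configs_path) before launching the target with extra_key.json. A rough sketch of that bookkeeping; only the array keys, parameters, and paths are taken from the log, the helper body is an assumption rather than the real json_config_test_start_app:

  # Sketch: per-app bookkeeping behind the extra-key launch step.
  rootdir=/home/vagrant/spdk_repo/spdk
  declare -A app_pid=([target]="")
  declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock)
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]=$rootdir/test/json_config/extra_key.json)

  start_app_sketch() {
      local app=$1
      $rootdir/build/bin/spdk_tgt ${app_params[$app]} \
          -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
      app_pid[$app]=$!
  }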
00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=105765 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:05.962 Waiting for target to run... 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:05.962 12:53:24 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 105765 /var/tmp/spdk_tgt.sock 00:08:05.962 12:53:24 -- common/autotest_common.sh@819 -- # '[' -z 105765 ']' 00:08:05.962 12:53:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:05.962 12:53:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:05.962 12:53:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:05.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:05.962 12:53:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:05.962 12:53:24 -- common/autotest_common.sh@10 -- # set +x 00:08:06.221 [2024-06-11 12:53:24.888511] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:06.221 [2024-06-11 12:53:24.889121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105765 ] 00:08:06.788 [2024-06-11 12:53:25.349017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.788 [2024-06-11 12:53:25.507764] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:06.788 [2024-06-11 12:53:25.508282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.184 00:08:08.184 INFO: shutting down applications... 00:08:08.184 12:53:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:08.184 12:53:26 -- common/autotest_common.sh@852 -- # return 0 00:08:08.184 12:53:26 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:08.184 12:53:26 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
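The "waitforlisten 105765 /var/tmp/spdk_tgt.sock" step above blocks until the freshly started target answers on its RPC socket, while also checking that the process has not died in the meantime. The loop below is only an outline of that behaviour, assuming rpc_get_methods as the probe; the real helper in autotest_common.sh performs additional checks:

  # Sketch: wait for a spdk_tgt pid to start answering RPCs on its UNIX socket.
  waitforlisten_sketch() {
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
      local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
      while kill -0 "$pid" 2>/dev/null; do
          "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0   # target is listening
          sleep 0.1
      done
      return 1    # process exited before the socket came up
  }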
00:08:08.184 12:53:26 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:08.184 12:53:26 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:08.184 12:53:26 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:08.184 12:53:26 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 105765 ]] 00:08:08.184 12:53:26 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 105765 00:08:08.184 12:53:26 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:08.184 12:53:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:08.184 12:53:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105765 00:08:08.184 12:53:26 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:08.457 12:53:27 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:08.457 12:53:27 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:08.457 12:53:27 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105765 00:08:08.457 12:53:27 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:09.025 12:53:27 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:09.025 12:53:27 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:09.025 12:53:27 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105765 00:08:09.025 12:53:27 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:09.283 12:53:28 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:09.283 12:53:28 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:09.283 12:53:28 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105765 00:08:09.283 12:53:28 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:09.850 12:53:28 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:09.850 12:53:28 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:09.850 12:53:28 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105765 00:08:09.850 12:53:28 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:10.417 SPDK target shutdown done 00:08:10.417 Success 00:08:10.417 12:53:29 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:10.417 12:53:29 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:10.417 12:53:29 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105765 00:08:10.417 12:53:29 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:08:10.417 12:53:29 -- json_config/json_config_extra_key.sh@52 -- # break 00:08:10.417 12:53:29 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:08:10.417 12:53:29 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:08:10.417 12:53:29 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:08:10.417 ************************************ 00:08:10.417 END TEST json_config_extra_key 00:08:10.417 ************************************ 00:08:10.417 00:08:10.417 real 0m4.373s 00:08:10.417 user 0m4.119s 00:08:10.417 sys 0m0.562s 00:08:10.417 12:53:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.417 12:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:10.417 12:53:29 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:10.417 12:53:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:10.417 12:53:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.417 12:53:29 -- 
common/autotest_common.sh@10 -- # set +x 00:08:10.417 ************************************ 00:08:10.417 START TEST alias_rpc 00:08:10.417 ************************************ 00:08:10.418 12:53:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:10.418 * Looking for test storage... 00:08:10.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:10.418 12:53:29 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:10.418 12:53:29 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=105908 00:08:10.418 12:53:29 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:10.418 12:53:29 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 105908 00:08:10.418 12:53:29 -- common/autotest_common.sh@819 -- # '[' -z 105908 ']' 00:08:10.418 12:53:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.418 12:53:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:10.418 12:53:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.418 12:53:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:10.418 12:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:10.676 [2024-06-11 12:53:29.295659] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:10.676 [2024-06-11 12:53:29.296128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105908 ] 00:08:10.676 [2024-06-11 12:53:29.484986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.935 [2024-06-11 12:53:29.670423] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:10.935 [2024-06-11 12:53:29.670907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.312 12:53:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:12.312 12:53:30 -- common/autotest_common.sh@852 -- # return 0 00:08:12.312 12:53:30 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:12.312 12:53:31 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 105908 00:08:12.312 12:53:31 -- common/autotest_common.sh@926 -- # '[' -z 105908 ']' 00:08:12.312 12:53:31 -- common/autotest_common.sh@930 -- # kill -0 105908 00:08:12.312 12:53:31 -- common/autotest_common.sh@931 -- # uname 00:08:12.312 12:53:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:12.312 12:53:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105908 00:08:12.312 killing process with pid 105908 00:08:12.312 12:53:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:12.312 12:53:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:12.312 12:53:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105908' 00:08:12.312 12:53:31 -- common/autotest_common.sh@945 -- # kill 105908 00:08:12.313 12:53:31 -- common/autotest_common.sh@950 -- # wait 105908 00:08:14.216 ************************************ 00:08:14.216 END TEST alias_rpc 00:08:14.216 ************************************ 00:08:14.216 00:08:14.216 real 
0m3.818s 00:08:14.216 user 0m3.982s 00:08:14.216 sys 0m0.568s 00:08:14.216 12:53:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.216 12:53:32 -- common/autotest_common.sh@10 -- # set +x 00:08:14.216 12:53:32 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:08:14.216 12:53:32 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:14.216 12:53:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:14.216 12:53:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.216 12:53:32 -- common/autotest_common.sh@10 -- # set +x 00:08:14.216 ************************************ 00:08:14.216 START TEST spdkcli_tcp 00:08:14.216 ************************************ 00:08:14.216 12:53:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:14.475 * Looking for test storage... 00:08:14.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:14.475 12:53:33 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:14.475 12:53:33 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:14.475 12:53:33 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:14.475 12:53:33 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:14.475 12:53:33 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:14.475 12:53:33 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:14.475 12:53:33 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:14.475 12:53:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:14.475 12:53:33 -- common/autotest_common.sh@10 -- # set +x 00:08:14.475 12:53:33 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=106010 00:08:14.475 12:53:33 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:14.475 12:53:33 -- spdkcli/tcp.sh@27 -- # waitforlisten 106010 00:08:14.475 12:53:33 -- common/autotest_common.sh@819 -- # '[' -z 106010 ']' 00:08:14.475 12:53:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.475 12:53:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:14.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.475 12:53:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.475 12:53:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:14.475 12:53:33 -- common/autotest_common.sh@10 -- # set +x 00:08:14.475 [2024-06-11 12:53:33.162157] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
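spdkcli/tcp.sh exercises the JSON-RPC server over TCP: socat bridges 127.0.0.1:9998 to the target's UNIX socket, and rpc.py is pointed at the TCP address with retries (-r 100) and a 2 s timeout (-t 2), as the commands that follow show. The essential setup, with teardown added as an assumption since it is not visible in the log:

  # Sketch: expose the UNIX-domain RPC socket on TCP and query it through the bridge.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid" 2>/dev/null    # assumed cleanup, not shown in the log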
00:08:14.475 [2024-06-11 12:53:33.162527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106010 ] 00:08:14.734 [2024-06-11 12:53:33.333672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:14.734 [2024-06-11 12:53:33.512786] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.734 [2024-06-11 12:53:33.513610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.734 [2024-06-11 12:53:33.513613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.110 12:53:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:16.110 12:53:34 -- common/autotest_common.sh@852 -- # return 0 00:08:16.110 12:53:34 -- spdkcli/tcp.sh@31 -- # socat_pid=106046 00:08:16.110 12:53:34 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:16.110 12:53:34 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:16.369 [ 00:08:16.369 "spdk_get_version", 00:08:16.369 "rpc_get_methods", 00:08:16.369 "trace_get_info", 00:08:16.369 "trace_get_tpoint_group_mask", 00:08:16.369 "trace_disable_tpoint_group", 00:08:16.369 "trace_enable_tpoint_group", 00:08:16.369 "trace_clear_tpoint_mask", 00:08:16.369 "trace_set_tpoint_mask", 00:08:16.369 "framework_get_pci_devices", 00:08:16.369 "framework_get_config", 00:08:16.369 "framework_get_subsystems", 00:08:16.369 "iobuf_get_stats", 00:08:16.369 "iobuf_set_options", 00:08:16.369 "sock_set_default_impl", 00:08:16.369 "sock_impl_set_options", 00:08:16.369 "sock_impl_get_options", 00:08:16.369 "vmd_rescan", 00:08:16.369 "vmd_remove_device", 00:08:16.369 "vmd_enable", 00:08:16.369 "accel_get_stats", 00:08:16.369 "accel_set_options", 00:08:16.369 "accel_set_driver", 00:08:16.369 "accel_crypto_key_destroy", 00:08:16.369 "accel_crypto_keys_get", 00:08:16.369 "accel_crypto_key_create", 00:08:16.369 "accel_assign_opc", 00:08:16.369 "accel_get_module_info", 00:08:16.369 "accel_get_opc_assignments", 00:08:16.369 "notify_get_notifications", 00:08:16.369 "notify_get_types", 00:08:16.369 "bdev_get_histogram", 00:08:16.369 "bdev_enable_histogram", 00:08:16.369 "bdev_set_qos_limit", 00:08:16.369 "bdev_set_qd_sampling_period", 00:08:16.369 "bdev_get_bdevs", 00:08:16.369 "bdev_reset_iostat", 00:08:16.369 "bdev_get_iostat", 00:08:16.369 "bdev_examine", 00:08:16.369 "bdev_wait_for_examine", 00:08:16.369 "bdev_set_options", 00:08:16.369 "scsi_get_devices", 00:08:16.369 "thread_set_cpumask", 00:08:16.369 "framework_get_scheduler", 00:08:16.369 "framework_set_scheduler", 00:08:16.369 "framework_get_reactors", 00:08:16.369 "thread_get_io_channels", 00:08:16.369 "thread_get_pollers", 00:08:16.369 "thread_get_stats", 00:08:16.369 "framework_monitor_context_switch", 00:08:16.369 "spdk_kill_instance", 00:08:16.369 "log_enable_timestamps", 00:08:16.369 "log_get_flags", 00:08:16.369 "log_clear_flag", 00:08:16.369 "log_set_flag", 00:08:16.369 "log_get_level", 00:08:16.369 "log_set_level", 00:08:16.369 "log_get_print_level", 00:08:16.369 "log_set_print_level", 00:08:16.369 "framework_enable_cpumask_locks", 00:08:16.369 "framework_disable_cpumask_locks", 00:08:16.369 "framework_wait_init", 00:08:16.369 "framework_start_init", 00:08:16.369 "virtio_blk_create_transport", 00:08:16.369 "virtio_blk_get_transports", 
00:08:16.369 "vhost_controller_set_coalescing", 00:08:16.369 "vhost_get_controllers", 00:08:16.369 "vhost_delete_controller", 00:08:16.369 "vhost_create_blk_controller", 00:08:16.369 "vhost_scsi_controller_remove_target", 00:08:16.369 "vhost_scsi_controller_add_target", 00:08:16.369 "vhost_start_scsi_controller", 00:08:16.369 "vhost_create_scsi_controller", 00:08:16.369 "nbd_get_disks", 00:08:16.369 "nbd_stop_disk", 00:08:16.369 "nbd_start_disk", 00:08:16.369 "env_dpdk_get_mem_stats", 00:08:16.369 "nvmf_subsystem_get_listeners", 00:08:16.369 "nvmf_subsystem_get_qpairs", 00:08:16.369 "nvmf_subsystem_get_controllers", 00:08:16.369 "nvmf_get_stats", 00:08:16.369 "nvmf_get_transports", 00:08:16.369 "nvmf_create_transport", 00:08:16.369 "nvmf_get_targets", 00:08:16.369 "nvmf_delete_target", 00:08:16.369 "nvmf_create_target", 00:08:16.369 "nvmf_subsystem_allow_any_host", 00:08:16.369 "nvmf_subsystem_remove_host", 00:08:16.369 "nvmf_subsystem_add_host", 00:08:16.369 "nvmf_subsystem_remove_ns", 00:08:16.369 "nvmf_subsystem_add_ns", 00:08:16.369 "nvmf_subsystem_listener_set_ana_state", 00:08:16.369 "nvmf_discovery_get_referrals", 00:08:16.369 "nvmf_discovery_remove_referral", 00:08:16.369 "nvmf_discovery_add_referral", 00:08:16.369 "nvmf_subsystem_remove_listener", 00:08:16.369 "nvmf_subsystem_add_listener", 00:08:16.369 "nvmf_delete_subsystem", 00:08:16.369 "nvmf_create_subsystem", 00:08:16.369 "nvmf_get_subsystems", 00:08:16.369 "nvmf_set_crdt", 00:08:16.369 "nvmf_set_config", 00:08:16.369 "nvmf_set_max_subsystems", 00:08:16.369 "iscsi_set_options", 00:08:16.369 "iscsi_get_auth_groups", 00:08:16.369 "iscsi_auth_group_remove_secret", 00:08:16.369 "iscsi_auth_group_add_secret", 00:08:16.369 "iscsi_delete_auth_group", 00:08:16.369 "iscsi_create_auth_group", 00:08:16.369 "iscsi_set_discovery_auth", 00:08:16.369 "iscsi_get_options", 00:08:16.369 "iscsi_target_node_request_logout", 00:08:16.369 "iscsi_target_node_set_redirect", 00:08:16.369 "iscsi_target_node_set_auth", 00:08:16.369 "iscsi_target_node_add_lun", 00:08:16.369 "iscsi_get_connections", 00:08:16.369 "iscsi_portal_group_set_auth", 00:08:16.369 "iscsi_start_portal_group", 00:08:16.369 "iscsi_delete_portal_group", 00:08:16.369 "iscsi_create_portal_group", 00:08:16.369 "iscsi_get_portal_groups", 00:08:16.369 "iscsi_delete_target_node", 00:08:16.369 "iscsi_target_node_remove_pg_ig_maps", 00:08:16.369 "iscsi_target_node_add_pg_ig_maps", 00:08:16.369 "iscsi_create_target_node", 00:08:16.369 "iscsi_get_target_nodes", 00:08:16.369 "iscsi_delete_initiator_group", 00:08:16.369 "iscsi_initiator_group_remove_initiators", 00:08:16.369 "iscsi_initiator_group_add_initiators", 00:08:16.369 "iscsi_create_initiator_group", 00:08:16.369 "iscsi_get_initiator_groups", 00:08:16.369 "iaa_scan_accel_module", 00:08:16.369 "dsa_scan_accel_module", 00:08:16.369 "ioat_scan_accel_module", 00:08:16.369 "accel_error_inject_error", 00:08:16.369 "bdev_iscsi_delete", 00:08:16.369 "bdev_iscsi_create", 00:08:16.369 "bdev_iscsi_set_options", 00:08:16.369 "bdev_virtio_attach_controller", 00:08:16.369 "bdev_virtio_scsi_get_devices", 00:08:16.369 "bdev_virtio_detach_controller", 00:08:16.369 "bdev_virtio_blk_set_hotplug", 00:08:16.369 "bdev_ftl_set_property", 00:08:16.369 "bdev_ftl_get_properties", 00:08:16.369 "bdev_ftl_get_stats", 00:08:16.369 "bdev_ftl_unmap", 00:08:16.369 "bdev_ftl_unload", 00:08:16.369 "bdev_ftl_delete", 00:08:16.369 "bdev_ftl_load", 00:08:16.369 "bdev_ftl_create", 00:08:16.369 "bdev_aio_delete", 00:08:16.369 "bdev_aio_rescan", 00:08:16.369 "bdev_aio_create", 
00:08:16.369 "blobfs_create", 00:08:16.369 "blobfs_detect", 00:08:16.369 "blobfs_set_cache_size", 00:08:16.369 "bdev_zone_block_delete", 00:08:16.369 "bdev_zone_block_create", 00:08:16.369 "bdev_delay_delete", 00:08:16.369 "bdev_delay_create", 00:08:16.369 "bdev_delay_update_latency", 00:08:16.369 "bdev_split_delete", 00:08:16.369 "bdev_split_create", 00:08:16.369 "bdev_error_inject_error", 00:08:16.369 "bdev_error_delete", 00:08:16.369 "bdev_error_create", 00:08:16.369 "bdev_raid_set_options", 00:08:16.369 "bdev_raid_remove_base_bdev", 00:08:16.369 "bdev_raid_add_base_bdev", 00:08:16.369 "bdev_raid_delete", 00:08:16.369 "bdev_raid_create", 00:08:16.369 "bdev_raid_get_bdevs", 00:08:16.369 "bdev_lvol_grow_lvstore", 00:08:16.369 "bdev_lvol_get_lvols", 00:08:16.369 "bdev_lvol_get_lvstores", 00:08:16.369 "bdev_lvol_delete", 00:08:16.369 "bdev_lvol_set_read_only", 00:08:16.369 "bdev_lvol_resize", 00:08:16.369 "bdev_lvol_decouple_parent", 00:08:16.369 "bdev_lvol_inflate", 00:08:16.369 "bdev_lvol_rename", 00:08:16.369 "bdev_lvol_clone_bdev", 00:08:16.369 "bdev_lvol_clone", 00:08:16.369 "bdev_lvol_snapshot", 00:08:16.369 "bdev_lvol_create", 00:08:16.369 "bdev_lvol_delete_lvstore", 00:08:16.369 "bdev_lvol_rename_lvstore", 00:08:16.369 "bdev_lvol_create_lvstore", 00:08:16.369 "bdev_passthru_delete", 00:08:16.369 "bdev_passthru_create", 00:08:16.369 "bdev_nvme_cuse_unregister", 00:08:16.369 "bdev_nvme_cuse_register", 00:08:16.369 "bdev_opal_new_user", 00:08:16.369 "bdev_opal_set_lock_state", 00:08:16.369 "bdev_opal_delete", 00:08:16.369 "bdev_opal_get_info", 00:08:16.370 "bdev_opal_create", 00:08:16.370 "bdev_nvme_opal_revert", 00:08:16.370 "bdev_nvme_opal_init", 00:08:16.370 "bdev_nvme_send_cmd", 00:08:16.370 "bdev_nvme_get_path_iostat", 00:08:16.370 "bdev_nvme_get_mdns_discovery_info", 00:08:16.370 "bdev_nvme_stop_mdns_discovery", 00:08:16.370 "bdev_nvme_start_mdns_discovery", 00:08:16.370 "bdev_nvme_set_multipath_policy", 00:08:16.370 "bdev_nvme_set_preferred_path", 00:08:16.370 "bdev_nvme_get_io_paths", 00:08:16.370 "bdev_nvme_remove_error_injection", 00:08:16.370 "bdev_nvme_add_error_injection", 00:08:16.370 "bdev_nvme_get_discovery_info", 00:08:16.370 "bdev_nvme_stop_discovery", 00:08:16.370 "bdev_nvme_start_discovery", 00:08:16.370 "bdev_nvme_get_controller_health_info", 00:08:16.370 "bdev_nvme_disable_controller", 00:08:16.370 "bdev_nvme_enable_controller", 00:08:16.370 "bdev_nvme_reset_controller", 00:08:16.370 "bdev_nvme_get_transport_statistics", 00:08:16.370 "bdev_nvme_apply_firmware", 00:08:16.370 "bdev_nvme_detach_controller", 00:08:16.370 "bdev_nvme_get_controllers", 00:08:16.370 "bdev_nvme_attach_controller", 00:08:16.370 "bdev_nvme_set_hotplug", 00:08:16.370 "bdev_nvme_set_options", 00:08:16.370 "bdev_null_resize", 00:08:16.370 "bdev_null_delete", 00:08:16.370 "bdev_null_create", 00:08:16.370 "bdev_malloc_delete", 00:08:16.370 "bdev_malloc_create" 00:08:16.370 ] 00:08:16.370 12:53:35 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:16.370 12:53:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:16.370 12:53:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.370 12:53:35 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:16.370 12:53:35 -- spdkcli/tcp.sh@38 -- # killprocess 106010 00:08:16.370 12:53:35 -- common/autotest_common.sh@926 -- # '[' -z 106010 ']' 00:08:16.370 12:53:35 -- common/autotest_common.sh@930 -- # kill -0 106010 00:08:16.370 12:53:35 -- common/autotest_common.sh@931 -- # uname 00:08:16.370 12:53:35 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:08:16.370 12:53:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106010 00:08:16.370 12:53:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:16.370 killing process with pid 106010 00:08:16.370 12:53:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:16.370 12:53:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106010' 00:08:16.370 12:53:35 -- common/autotest_common.sh@945 -- # kill 106010 00:08:16.370 12:53:35 -- common/autotest_common.sh@950 -- # wait 106010 00:08:18.302 ************************************ 00:08:18.302 END TEST spdkcli_tcp 00:08:18.302 ************************************ 00:08:18.302 00:08:18.302 real 0m4.044s 00:08:18.302 user 0m7.586s 00:08:18.302 sys 0m0.525s 00:08:18.302 12:53:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.302 12:53:37 -- common/autotest_common.sh@10 -- # set +x 00:08:18.302 12:53:37 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:18.302 12:53:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:18.302 12:53:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.302 12:53:37 -- common/autotest_common.sh@10 -- # set +x 00:08:18.302 ************************************ 00:08:18.302 START TEST dpdk_mem_utility 00:08:18.302 ************************************ 00:08:18.302 12:53:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:18.561 * Looking for test storage... 00:08:18.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:18.561 12:53:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:18.561 12:53:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=106136 00:08:18.561 12:53:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:18.561 12:53:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 106136 00:08:18.561 12:53:37 -- common/autotest_common.sh@819 -- # '[' -z 106136 ']' 00:08:18.561 12:53:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.561 12:53:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:18.561 12:53:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.561 12:53:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:18.561 12:53:37 -- common/autotest_common.sh@10 -- # set +x 00:08:18.561 [2024-06-11 12:53:37.253122] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
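The dpdk_mem_utility test asks the target for a DPDK memory dump via the env_dpdk_get_mem_stats RPC (which reports the dump file, /tmp/spdk_mem_dump.txt, as shown below) and then summarizes it with scripts/dpdk_mem_info.py, once without arguments and once per heap with -m 0. A compact sketch of that flow; the jq step and the assumption that dpdk_mem_info.py picks up the default dump path are illustrative:

  # Sketch: trigger a DPDK memory dump and post-process it.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk.sock
  dump=$("$RPC" -s "$SOCK" env_dpdk_get_mem_stats | jq -r .filename)   # e.g. /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heaps, mempools, memzones summary
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # detailed view of heap id 0
  echo "raw dump: $dump"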
00:08:18.561 [2024-06-11 12:53:37.253594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106136 ] 00:08:18.819 [2024-06-11 12:53:37.420973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.819 [2024-06-11 12:53:37.615299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:18.819 [2024-06-11 12:53:37.615713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.199 12:53:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:20.199 12:53:38 -- common/autotest_common.sh@852 -- # return 0 00:08:20.199 12:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:20.199 12:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:20.199 12:53:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.199 12:53:38 -- common/autotest_common.sh@10 -- # set +x 00:08:20.199 { 00:08:20.199 "filename": "/tmp/spdk_mem_dump.txt" 00:08:20.199 } 00:08:20.199 12:53:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.199 12:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:20.199 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:20.199 1 heaps totaling size 820.000000 MiB 00:08:20.199 size: 820.000000 MiB heap id: 0 00:08:20.199 end heaps---------- 00:08:20.199 8 mempools totaling size 598.116089 MiB 00:08:20.199 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:20.199 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:20.199 size: 84.521057 MiB name: bdev_io_106136 00:08:20.199 size: 51.011292 MiB name: evtpool_106136 00:08:20.199 size: 50.003479 MiB name: msgpool_106136 00:08:20.199 size: 21.763794 MiB name: PDU_Pool 00:08:20.199 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:20.199 size: 0.026123 MiB name: Session_Pool 00:08:20.199 end mempools------- 00:08:20.199 6 memzones totaling size 4.142822 MiB 00:08:20.199 size: 1.000366 MiB name: RG_ring_0_106136 00:08:20.199 size: 1.000366 MiB name: RG_ring_1_106136 00:08:20.199 size: 1.000366 MiB name: RG_ring_4_106136 00:08:20.199 size: 1.000366 MiB name: RG_ring_5_106136 00:08:20.199 size: 0.125366 MiB name: RG_ring_2_106136 00:08:20.199 size: 0.015991 MiB name: RG_ring_3_106136 00:08:20.199 end memzones------- 00:08:20.199 12:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:20.199 heap id: 0 total size: 820.000000 MiB number of busy elements: 227 number of free elements: 18 00:08:20.199 list of free elements. 
size: 18.469482 MiB 00:08:20.199 element at address: 0x200000400000 with size: 1.999451 MiB 00:08:20.199 element at address: 0x200000800000 with size: 1.996887 MiB 00:08:20.199 element at address: 0x200007000000 with size: 1.995972 MiB 00:08:20.199 element at address: 0x20000b200000 with size: 1.995972 MiB 00:08:20.199 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:20.199 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:20.199 element at address: 0x200019600000 with size: 0.999329 MiB 00:08:20.199 element at address: 0x200003e00000 with size: 0.996094 MiB 00:08:20.199 element at address: 0x200032200000 with size: 0.994324 MiB 00:08:20.199 element at address: 0x200018e00000 with size: 0.959656 MiB 00:08:20.199 element at address: 0x200019900040 with size: 0.937256 MiB 00:08:20.199 element at address: 0x200000200000 with size: 0.835083 MiB 00:08:20.199 element at address: 0x20001b000000 with size: 0.560974 MiB 00:08:20.199 element at address: 0x200019200000 with size: 0.489197 MiB 00:08:20.199 element at address: 0x200019a00000 with size: 0.485413 MiB 00:08:20.199 element at address: 0x200013800000 with size: 0.468140 MiB 00:08:20.199 element at address: 0x200028400000 with size: 0.399719 MiB 00:08:20.199 element at address: 0x200003a00000 with size: 0.356140 MiB 00:08:20.199 list of standard malloc elements. size: 199.266113 MiB 00:08:20.199 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:08:20.199 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:08:20.199 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:20.199 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:20.199 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:20.199 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:20.199 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:08:20.199 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:20.199 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:08:20.199 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:08:20.199 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:08:20.199 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:08:20.199 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:08:20.200 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200003aff980 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200003affa80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200003eff000 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200013877d80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200013877e80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200013877f80 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:20.200 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:20.200 element at address: 0x200019abc680 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b08f9c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b091ec0 
with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:08:20.200 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b094fc0 with size: 0.000244 MiB 
00:08:20.201 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:08:20.201 element at address: 0x200028466540 with size: 0.000244 MiB 00:08:20.201 element at address: 0x200028466640 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846d300 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846d580 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846e080 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846e280 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846e480 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846e680 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846f180 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:20.201 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:20.201 list of 
memzone associated elements. size: 602.264404 MiB 00:08:20.201 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:08:20.201 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:20.201 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:08:20.201 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:20.201 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:08:20.201 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_106136_0 00:08:20.201 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:08:20.201 associated memzone info: size: 48.002930 MiB name: MP_evtpool_106136_0 00:08:20.201 element at address: 0x200003fff340 with size: 48.003113 MiB 00:08:20.201 associated memzone info: size: 48.002930 MiB name: MP_msgpool_106136_0 00:08:20.201 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:08:20.201 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:20.201 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:08:20.201 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:20.201 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:08:20.201 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_106136 00:08:20.201 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:08:20.201 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_106136 00:08:20.201 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:20.201 associated memzone info: size: 1.007996 MiB name: MP_evtpool_106136 00:08:20.201 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:20.201 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:20.201 element at address: 0x200019abc780 with size: 1.008179 MiB 00:08:20.201 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:20.201 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:20.201 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:20.201 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:08:20.201 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:20.201 element at address: 0x200003eff100 with size: 1.000549 MiB 00:08:20.201 associated memzone info: size: 1.000366 MiB name: RG_ring_0_106136 00:08:20.201 element at address: 0x200003affb80 with size: 1.000549 MiB 00:08:20.201 associated memzone info: size: 1.000366 MiB name: RG_ring_1_106136 00:08:20.201 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:08:20.201 associated memzone info: size: 1.000366 MiB name: RG_ring_4_106136 00:08:20.201 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:08:20.201 associated memzone info: size: 1.000366 MiB name: RG_ring_5_106136 00:08:20.201 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:08:20.201 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_106136 00:08:20.201 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:08:20.201 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:20.201 element at address: 0x200013878680 with size: 0.500549 MiB 00:08:20.201 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:20.201 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:08:20.201 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:20.201 element at address: 0x200003adf740 with size: 0.125549 MiB 
00:08:20.201 associated memzone info: size: 0.125366 MiB name: RG_ring_2_106136 00:08:20.201 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:08:20.201 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:20.201 element at address: 0x200028466740 with size: 0.023804 MiB 00:08:20.201 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:20.201 element at address: 0x200003adb500 with size: 0.016174 MiB 00:08:20.201 associated memzone info: size: 0.015991 MiB name: RG_ring_3_106136 00:08:20.201 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:08:20.201 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:20.201 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:08:20.201 associated memzone info: size: 0.000183 MiB name: MP_msgpool_106136 00:08:20.201 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:08:20.201 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_106136 00:08:20.201 element at address: 0x20002846d400 with size: 0.000366 MiB 00:08:20.201 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:20.201 12:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:20.201 12:53:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 106136 00:08:20.201 12:53:38 -- common/autotest_common.sh@926 -- # '[' -z 106136 ']' 00:08:20.201 12:53:38 -- common/autotest_common.sh@930 -- # kill -0 106136 00:08:20.201 12:53:38 -- common/autotest_common.sh@931 -- # uname 00:08:20.201 12:53:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:20.201 12:53:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106136 00:08:20.201 12:53:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:20.201 12:53:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:20.201 12:53:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106136' 00:08:20.201 killing process with pid 106136 00:08:20.201 12:53:38 -- common/autotest_common.sh@945 -- # kill 106136 00:08:20.201 12:53:38 -- common/autotest_common.sh@950 -- # wait 106136 00:08:22.105 ************************************ 00:08:22.105 END TEST dpdk_mem_utility 00:08:22.105 ************************************ 00:08:22.105 00:08:22.105 real 0m3.786s 00:08:22.105 user 0m3.971s 00:08:22.105 sys 0m0.492s 00:08:22.105 12:53:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.105 12:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:22.105 12:53:40 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:22.105 12:53:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:22.105 12:53:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.105 12:53:40 -- common/autotest_common.sh@10 -- # set +x 00:08:22.105 ************************************ 00:08:22.105 START TEST event 00:08:22.105 ************************************ 00:08:22.105 12:53:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:22.364 * Looking for test storage... 
00:08:22.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:22.364 12:53:41 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:22.364 12:53:41 -- bdev/nbd_common.sh@6 -- # set -e 00:08:22.364 12:53:41 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:22.364 12:53:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:22.364 12:53:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.364 12:53:41 -- common/autotest_common.sh@10 -- # set +x 00:08:22.364 ************************************ 00:08:22.364 START TEST event_perf 00:08:22.364 ************************************ 00:08:22.364 12:53:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:22.364 Running I/O for 1 seconds...[2024-06-11 12:53:41.069310] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:22.364 [2024-06-11 12:53:41.069654] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106260 ] 00:08:22.622 [2024-06-11 12:53:41.254560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.622 [2024-06-11 12:53:41.449967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.622 [2024-06-11 12:53:41.450082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.622 [2024-06-11 12:53:41.450195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.622 Running I/O for 1 seconds...[2024-06-11 12:53:41.450204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.996 00:08:23.996 lcore 0: 198177 00:08:23.996 lcore 1: 198175 00:08:23.996 lcore 2: 198175 00:08:23.996 lcore 3: 198176 00:08:24.255 done. 00:08:24.255 ************************************ 00:08:24.255 END TEST event_perf 00:08:24.255 ************************************ 00:08:24.255 00:08:24.255 real 0m1.844s 00:08:24.255 user 0m4.624s 00:08:24.255 sys 0m0.121s 00:08:24.255 12:53:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.255 12:53:42 -- common/autotest_common.sh@10 -- # set +x 00:08:24.255 12:53:42 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:24.255 12:53:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:24.255 12:53:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.255 12:53:42 -- common/autotest_common.sh@10 -- # set +x 00:08:24.255 ************************************ 00:08:24.255 START TEST event_reactor 00:08:24.255 ************************************ 00:08:24.255 12:53:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:24.255 [2024-06-11 12:53:42.964908] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:24.255 [2024-06-11 12:53:42.965220] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106313 ] 00:08:24.514 [2024-06-11 12:53:43.134220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.773 [2024-06-11 12:53:43.359152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.148 test_start 00:08:26.148 oneshot 00:08:26.148 tick 100 00:08:26.148 tick 100 00:08:26.148 tick 250 00:08:26.148 tick 100 00:08:26.148 tick 100 00:08:26.148 tick 100 00:08:26.148 tick 250 00:08:26.148 tick 500 00:08:26.148 tick 100 00:08:26.148 tick 100 00:08:26.148 tick 250 00:08:26.148 tick 100 00:08:26.148 tick 100 00:08:26.148 test_end 00:08:26.148 ************************************ 00:08:26.148 END TEST event_reactor 00:08:26.148 ************************************ 00:08:26.148 00:08:26.148 real 0m1.831s 00:08:26.148 user 0m1.610s 00:08:26.148 sys 0m0.121s 00:08:26.148 12:53:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.148 12:53:44 -- common/autotest_common.sh@10 -- # set +x 00:08:26.148 12:53:44 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:26.148 12:53:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:26.148 12:53:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.148 12:53:44 -- common/autotest_common.sh@10 -- # set +x 00:08:26.148 ************************************ 00:08:26.148 START TEST event_reactor_perf 00:08:26.148 ************************************ 00:08:26.148 12:53:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:26.148 [2024-06-11 12:53:44.836414] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:26.148 [2024-06-11 12:53:44.836701] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106363 ] 00:08:26.407 [2024-06-11 12:53:44.998516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.407 [2024-06-11 12:53:45.208528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.841 test_start 00:08:27.841 test_end 00:08:27.841 Performance: 301249 events per second 00:08:27.841 ************************************ 00:08:27.841 END TEST event_reactor_perf 00:08:27.841 ************************************ 00:08:27.841 00:08:27.841 real 0m1.855s 00:08:27.841 user 0m1.643s 00:08:27.841 sys 0m0.111s 00:08:27.841 12:53:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.841 12:53:46 -- common/autotest_common.sh@10 -- # set +x 00:08:28.100 12:53:46 -- event/event.sh@49 -- # uname -s 00:08:28.100 12:53:46 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:28.100 12:53:46 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:28.100 12:53:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:28.100 12:53:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.100 12:53:46 -- common/autotest_common.sh@10 -- # set +x 00:08:28.100 ************************************ 00:08:28.100 START TEST event_scheduler 00:08:28.100 ************************************ 00:08:28.100 12:53:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:28.100 * Looking for test storage... 00:08:28.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:28.100 12:53:46 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:28.100 12:53:46 -- scheduler/scheduler.sh@35 -- # scheduler_pid=106439 00:08:28.100 12:53:46 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:28.100 12:53:46 -- scheduler/scheduler.sh@37 -- # waitforlisten 106439 00:08:28.100 12:53:46 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:28.100 12:53:46 -- common/autotest_common.sh@819 -- # '[' -z 106439 ']' 00:08:28.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.100 12:53:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.100 12:53:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:28.100 12:53:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.100 12:53:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:28.100 12:53:46 -- common/autotest_common.sh@10 -- # set +x 00:08:28.100 [2024-06-11 12:53:46.875423] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:28.100 [2024-06-11 12:53:46.876124] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106439 ] 00:08:28.359 [2024-06-11 12:53:47.071090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.618 [2024-06-11 12:53:47.327031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.618 [2024-06-11 12:53:47.327140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.618 [2024-06-11 12:53:47.327263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.618 [2024-06-11 12:53:47.327266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.184 12:53:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:29.184 12:53:47 -- common/autotest_common.sh@852 -- # return 0 00:08:29.184 12:53:47 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:29.185 12:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.185 12:53:47 -- common/autotest_common.sh@10 -- # set +x 00:08:29.185 POWER: Env isn't set yet! 00:08:29.185 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:29.185 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:29.185 POWER: Cannot set governor of lcore 0 to userspace 00:08:29.185 POWER: Attempting to initialise PSTAT power management... 00:08:29.185 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:29.185 POWER: Cannot set governor of lcore 0 to performance 00:08:29.185 POWER: Attempting to initialise AMD PSTATE power management... 00:08:29.185 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:29.185 POWER: Cannot set governor of lcore 0 to userspace 00:08:29.185 POWER: Attempting to initialise CPPC power management... 00:08:29.185 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:29.185 POWER: Cannot set governor of lcore 0 to userspace 00:08:29.185 POWER: Attempting to initialise VM power management... 00:08:29.185 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:29.185 POWER: Unable to set Power Management Environment for lcore 0 00:08:29.185 [2024-06-11 12:53:47.834835] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:08:29.185 [2024-06-11 12:53:47.834966] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:08:29.185 [2024-06-11 12:53:47.835093] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:08:29.185 12:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.185 12:53:47 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:29.185 12:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.185 12:53:47 -- common/autotest_common.sh@10 -- # set +x 00:08:29.443 [2024-06-11 12:53:48.185272] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
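The scheduler test above configures its target purely over JSON-RPC: it switches to the dynamic scheduler, then completes framework initialization (the power-management probes all fail on this VM, so the dpdk governor is skipped). Below is a minimal sketch, not part of the recorded run, of issuing the same two calls by hand; the rpc.py path and socket are the ones shown in the log, and a scheduler/spdk_tgt app started with --wait-for-rpc is assumed to be listening.

```bash
# Sketch only: replay the two framework RPCs traced above against an app
# started with --wait-for-rpc (assumed already running on the default socket).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

"$rpc" -s "$sock" framework_set_scheduler dynamic   # falls back if no cpufreq governor is usable
"$rpc" -s "$sock" framework_start_init              # lets the app finish init (the point where the log prints "Scheduler test application started.")
```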
00:08:29.443 12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.443 12:53:48 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:29.443 12:53:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:29.443 12:53:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.443 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.443 ************************************ 00:08:29.444 START TEST scheduler_create_thread 00:08:29.444 ************************************ 00:08:29.444 12:53:48 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:08:29.444 12:53:48 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:29.444 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.444 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.444 2 00:08:29.444 12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.444 12:53:48 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:29.444 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.444 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.444 3 00:08:29.444 12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.444 12:53:48 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:29.444 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.444 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.444 4 00:08:29.444 12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.444 12:53:48 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:29.444 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.444 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.444 5 00:08:29.444 12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.444 12:53:48 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:29.444 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.444 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.444 6 00:08:29.444 12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.444 12:53:48 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:29.444 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.444 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.444 7 00:08:29.444 12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.444 12:53:48 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:29.444 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.444 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.444 8 00:08:29.444 12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.444 12:53:48 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:29.444 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.444 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.444 9 00:08:29.444 
12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.444 12:53:48 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:29.444 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.444 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.444 10 00:08:29.444 12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.444 12:53:48 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:29.444 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.444 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.702 12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.702 12:53:48 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:29.702 12:53:48 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:29.702 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.702 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:29.702 12:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.702 12:53:48 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:29.702 12:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.702 12:53:48 -- common/autotest_common.sh@10 -- # set +x 00:08:30.638 12:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.638 12:53:49 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:30.638 12:53:49 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:30.638 12:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.638 12:53:49 -- common/autotest_common.sh@10 -- # set +x 00:08:31.574 ************************************ 00:08:31.574 END TEST scheduler_create_thread 00:08:31.574 ************************************ 00:08:31.574 12:53:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.574 00:08:31.574 real 0m2.148s 00:08:31.574 user 0m0.010s 00:08:31.574 sys 0m0.000s 00:08:31.574 12:53:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.574 12:53:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.574 12:53:50 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:31.574 12:53:50 -- scheduler/scheduler.sh@46 -- # killprocess 106439 00:08:31.574 12:53:50 -- common/autotest_common.sh@926 -- # '[' -z 106439 ']' 00:08:31.574 12:53:50 -- common/autotest_common.sh@930 -- # kill -0 106439 00:08:31.574 12:53:50 -- common/autotest_common.sh@931 -- # uname 00:08:31.574 12:53:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:31.574 12:53:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106439 00:08:31.574 killing process with pid 106439 00:08:31.574 12:53:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:08:31.574 12:53:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:08:31.574 12:53:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106439' 00:08:31.574 12:53:50 -- common/autotest_common.sh@945 -- # kill 106439 00:08:31.574 12:53:50 -- common/autotest_common.sh@950 -- # wait 106439 00:08:32.141 [2024-06-11 12:53:50.825133] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
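Every test in this log tears its target down through the same killprocess helper, whose xtrace is visible above (empty-pid guard, kill -0 liveness check, comm lookup, echo, kill, wait). The following is a simplified sketch of that shape for readers following the trace, not the exact helper from test/common/autotest_common.sh.

```bash
# Simplified sketch of the killprocess() pattern traced above; the real helper
# in the SPDK repo adds uname/sudo handling around the same steps.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                                  # refuse an empty pid
    kill -0 "$pid"                                             # make sure it is still alive
    echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
    kill "$pid"
    wait "$pid" || true                                        # reap it; the target was launched by this shell
}
```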
00:08:33.516 ************************************ 00:08:33.516 END TEST event_scheduler 00:08:33.516 ************************************ 00:08:33.516 00:08:33.516 real 0m5.294s 00:08:33.516 user 0m8.637s 00:08:33.516 sys 0m0.420s 00:08:33.516 12:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.516 12:53:52 -- common/autotest_common.sh@10 -- # set +x 00:08:33.516 12:53:52 -- event/event.sh@51 -- # modprobe -n nbd 00:08:33.516 12:53:52 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:33.516 12:53:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:33.516 12:53:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.516 12:53:52 -- common/autotest_common.sh@10 -- # set +x 00:08:33.516 ************************************ 00:08:33.516 START TEST app_repeat 00:08:33.516 ************************************ 00:08:33.516 12:53:52 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:08:33.516 12:53:52 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.516 12:53:52 -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:08:33.516 12:53:52 -- event/event.sh@13 -- # local nbd_list 00:08:33.516 12:53:52 -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:08:33.516 12:53:52 -- event/event.sh@14 -- # local bdev_list 00:08:33.516 12:53:52 -- event/event.sh@15 -- # local repeat_times=4 00:08:33.516 12:53:52 -- event/event.sh@17 -- # modprobe nbd 00:08:33.516 12:53:52 -- event/event.sh@19 -- # repeat_pid=106583 00:08:33.516 12:53:52 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:33.516 12:53:52 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:33.516 12:53:52 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 106583' 00:08:33.516 Process app_repeat pid: 106583 00:08:33.516 12:53:52 -- event/event.sh@23 -- # for i in {0..2} 00:08:33.516 12:53:52 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:33.516 spdk_app_start Round 0 00:08:33.516 12:53:52 -- event/event.sh@25 -- # waitforlisten 106583 /var/tmp/spdk-nbd.sock 00:08:33.516 12:53:52 -- common/autotest_common.sh@819 -- # '[' -z 106583 ']' 00:08:33.516 12:53:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:33.516 12:53:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:33.516 12:53:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:33.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:33.516 12:53:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:33.516 12:53:52 -- common/autotest_common.sh@10 -- # set +x 00:08:33.516 [2024-06-11 12:53:52.118514] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:33.516 [2024-06-11 12:53:52.119634] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106583 ] 00:08:33.516 [2024-06-11 12:53:52.290001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:33.774 [2024-06-11 12:53:52.479119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.774 [2024-06-11 12:53:52.479116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.340 12:53:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:34.340 12:53:53 -- common/autotest_common.sh@852 -- # return 0 00:08:34.340 12:53:53 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:34.598 Malloc0 00:08:34.598 12:53:53 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:34.856 Malloc1 00:08:34.856 12:53:53 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@12 -- # local i 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:34.856 12:53:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:35.115 /dev/nbd0 00:08:35.115 12:53:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:35.115 12:53:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:35.115 12:53:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:35.115 12:53:53 -- common/autotest_common.sh@857 -- # local i 00:08:35.115 12:53:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:35.115 12:53:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:35.115 12:53:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:35.115 12:53:53 -- common/autotest_common.sh@861 -- # break 00:08:35.115 12:53:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:35.115 12:53:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:35.115 12:53:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:35.115 1+0 records in 00:08:35.115 1+0 records out 00:08:35.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660544 s, 6.2 MB/s 00:08:35.115 12:53:53 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:35.115 12:53:53 -- common/autotest_common.sh@874 -- # size=4096 00:08:35.115 12:53:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:35.115 12:53:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:35.115 12:53:53 -- common/autotest_common.sh@877 -- # return 0 00:08:35.115 12:53:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.115 12:53:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.115 12:53:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:35.374 /dev/nbd1 00:08:35.644 12:53:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:35.644 12:53:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:35.644 12:53:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:35.644 12:53:54 -- common/autotest_common.sh@857 -- # local i 00:08:35.644 12:53:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:35.644 12:53:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:35.644 12:53:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:35.644 12:53:54 -- common/autotest_common.sh@861 -- # break 00:08:35.644 12:53:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:35.644 12:53:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:35.644 12:53:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:35.644 1+0 records in 00:08:35.644 1+0 records out 00:08:35.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684273 s, 6.0 MB/s 00:08:35.644 12:53:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:35.644 12:53:54 -- common/autotest_common.sh@874 -- # size=4096 00:08:35.644 12:53:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:35.644 12:53:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:35.644 12:53:54 -- common/autotest_common.sh@877 -- # return 0 00:08:35.644 12:53:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.644 12:53:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.644 12:53:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:35.644 12:53:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.644 12:53:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:35.931 { 00:08:35.931 "nbd_device": "/dev/nbd0", 00:08:35.931 "bdev_name": "Malloc0" 00:08:35.931 }, 00:08:35.931 { 00:08:35.931 "nbd_device": "/dev/nbd1", 00:08:35.931 "bdev_name": "Malloc1" 00:08:35.931 } 00:08:35.931 ]' 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:35.931 { 00:08:35.931 "nbd_device": "/dev/nbd0", 00:08:35.931 "bdev_name": "Malloc0" 00:08:35.931 }, 00:08:35.931 { 00:08:35.931 "nbd_device": "/dev/nbd1", 00:08:35.931 "bdev_name": "Malloc1" 00:08:35.931 } 00:08:35.931 ]' 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:35.931 /dev/nbd1' 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:35.931 /dev/nbd1' 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:35.931 
12:53:54 -- bdev/nbd_common.sh@65 -- # count=2 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@95 -- # count=2 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:35.931 256+0 records in 00:08:35.931 256+0 records out 00:08:35.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105876 s, 99.0 MB/s 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:35.931 256+0 records in 00:08:35.931 256+0 records out 00:08:35.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259568 s, 40.4 MB/s 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:35.931 256+0 records in 00:08:35.931 256+0 records out 00:08:35.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317227 s, 33.1 MB/s 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@51 -- # local i 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.931 12:53:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.190 
12:53:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@41 -- # break 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.190 12:53:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:36.448 12:53:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:36.448 12:53:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:36.448 12:53:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:36.448 12:53:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.448 12:53:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.448 12:53:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:36.448 12:53:55 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:36.706 12:53:55 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:36.706 12:53:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.706 12:53:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:36.706 12:53:55 -- bdev/nbd_common.sh@41 -- # break 00:08:36.706 12:53:55 -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.706 12:53:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:36.706 12:53:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.706 12:53:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@65 -- # true 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@65 -- # count=0 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@104 -- # count=0 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:36.964 12:53:55 -- bdev/nbd_common.sh@109 -- # return 0 00:08:36.964 12:53:55 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:37.223 12:53:56 -- event/event.sh@35 -- # sleep 3 00:08:38.599 [2024-06-11 12:53:57.087940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:38.599 [2024-06-11 12:53:57.253331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.599 [2024-06-11 12:53:57.253331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.599 [2024-06-11 12:53:57.426130] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
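Each nbd_stop_disk RPC above is followed by waitfornbd_exit, which simply polls /proc/partitions until the device name disappears (up to 20 tries, 0.1 s apart, as the trace shows). An equivalent sketch:

    # Poll until an nbd device has vanished from /proc/partitions
    # (equivalent to bdev/nbd_common.sh waitfornbd_exit as traced above).
    waitfornbd_exit() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            # -w matches the whole word so "nbd1" does not also match "nbd10"
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }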
00:08:38.599 [2024-06-11 12:53:57.426249] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:40.500 spdk_app_start Round 1 00:08:40.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:40.500 12:53:59 -- event/event.sh@23 -- # for i in {0..2} 00:08:40.500 12:53:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:40.500 12:53:59 -- event/event.sh@25 -- # waitforlisten 106583 /var/tmp/spdk-nbd.sock 00:08:40.500 12:53:59 -- common/autotest_common.sh@819 -- # '[' -z 106583 ']' 00:08:40.500 12:53:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:40.500 12:53:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:40.500 12:53:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:40.500 12:53:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:40.500 12:53:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.500 12:53:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:40.500 12:53:59 -- common/autotest_common.sh@852 -- # return 0 00:08:40.500 12:53:59 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:40.758 Malloc0 00:08:41.016 12:53:59 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:41.274 Malloc1 00:08:41.274 12:53:59 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@12 -- # local i 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.274 12:53:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:41.533 /dev/nbd0 00:08:41.533 12:54:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:41.533 12:54:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:41.533 12:54:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:41.533 12:54:00 -- common/autotest_common.sh@857 -- # local i 00:08:41.533 12:54:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:41.533 12:54:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:41.533 12:54:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:41.533 12:54:00 -- common/autotest_common.sh@861 -- # break 00:08:41.533 12:54:00 -- common/autotest_common.sh@872 -- # (( 
i = 1 )) 00:08:41.533 12:54:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:41.533 12:54:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:41.533 1+0 records in 00:08:41.533 1+0 records out 00:08:41.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305119 s, 13.4 MB/s 00:08:41.533 12:54:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.533 12:54:00 -- common/autotest_common.sh@874 -- # size=4096 00:08:41.533 12:54:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.533 12:54:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:41.533 12:54:00 -- common/autotest_common.sh@877 -- # return 0 00:08:41.533 12:54:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.533 12:54:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.533 12:54:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:41.792 /dev/nbd1 00:08:41.792 12:54:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:41.792 12:54:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:41.792 12:54:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:41.792 12:54:00 -- common/autotest_common.sh@857 -- # local i 00:08:41.792 12:54:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:41.792 12:54:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:41.792 12:54:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:41.792 12:54:00 -- common/autotest_common.sh@861 -- # break 00:08:41.792 12:54:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:41.792 12:54:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:41.792 12:54:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:41.792 1+0 records in 00:08:41.792 1+0 records out 00:08:41.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369005 s, 11.1 MB/s 00:08:41.792 12:54:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.792 12:54:00 -- common/autotest_common.sh@874 -- # size=4096 00:08:41.792 12:54:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.792 12:54:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:41.792 12:54:00 -- common/autotest_common.sh@877 -- # return 0 00:08:41.792 12:54:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.792 12:54:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.792 12:54:00 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:41.792 12:54:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.792 12:54:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:42.051 { 00:08:42.051 "nbd_device": "/dev/nbd0", 00:08:42.051 "bdev_name": "Malloc0" 00:08:42.051 }, 00:08:42.051 { 00:08:42.051 "nbd_device": "/dev/nbd1", 00:08:42.051 "bdev_name": "Malloc1" 00:08:42.051 } 00:08:42.051 ]' 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:42.051 { 00:08:42.051 "nbd_device": "/dev/nbd0", 00:08:42.051 "bdev_name": "Malloc0" 00:08:42.051 }, 00:08:42.051 { 00:08:42.051 
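The start-side counterpart, waitfornbd in common/autotest_common.sh, first waits for the device node to show up in /proc/partitions and then proves it answers I/O with a single 4 KiB O_DIRECT read, checking that the read produced a non-empty file. A sketch under those assumptions; the scratch path and the bare retry logic are simplifications of the traced helper.

    # Wait for a freshly attached nbd device to appear and become readable.
    waitfornbd() {
        local nbd_name=$1
        local tmp i size
        tmp=$(mktemp)                          # scratch file for the probe read
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one direct 4 KiB read confirms the kernel can reach the SPDK bdev
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }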
"nbd_device": "/dev/nbd1", 00:08:42.051 "bdev_name": "Malloc1" 00:08:42.051 } 00:08:42.051 ]' 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:42.051 /dev/nbd1' 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:42.051 /dev/nbd1' 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@65 -- # count=2 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@95 -- # count=2 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:42.051 12:54:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:42.309 256+0 records in 00:08:42.309 256+0 records out 00:08:42.309 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103544 s, 101 MB/s 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:42.309 256+0 records in 00:08:42.309 256+0 records out 00:08:42.309 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263322 s, 39.8 MB/s 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:42.309 256+0 records in 00:08:42.309 256+0 records out 00:08:42.309 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0346392 s, 30.3 MB/s 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@50 -- # 
nbd_list=($2) 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@51 -- # local i 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.309 12:54:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:42.571 12:54:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:42.571 12:54:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:42.571 12:54:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:42.571 12:54:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.571 12:54:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.571 12:54:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:42.571 12:54:01 -- bdev/nbd_common.sh@41 -- # break 00:08:42.571 12:54:01 -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.571 12:54:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.571 12:54:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@41 -- # break 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.830 12:54:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@65 -- # true 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@65 -- # count=0 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@104 -- # count=0 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:43.088 12:54:01 -- bdev/nbd_common.sh@109 -- # return 0 00:08:43.088 12:54:01 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:43.656 12:54:02 -- event/event.sh@35 -- # sleep 3 00:08:44.632 [2024-06-11 12:54:03.331006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:44.890 [2024-06-11 12:54:03.491313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.890 [2024-06-11 12:54:03.491320] reactor.c: 937:reactor_run: *NOTICE*: 
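nbd_get_count, used before the write/verify and again after teardown, asks the target for its exported disks over the RPC socket and counts the /dev/nbd entries in the JSON reply: 2 while Malloc0/Malloc1 are attached, 0 once both are stopped. A condensed sketch; the rpc.py and socket paths are the ones the log prints.

    # Count nbd devices currently exported by the target (cf. nbd_common.sh nbd_get_count).
    nbd_get_count() {
        local rpc_server=$1                # e.g. /var/tmp/spdk-nbd.sock
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true     # grep -c still prints 0 but exits 1 on zero matches
    }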
Reactor started on core 0 00:08:44.890 [2024-06-11 12:54:03.653434] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:44.890 [2024-06-11 12:54:03.653536] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:46.793 12:54:05 -- event/event.sh@23 -- # for i in {0..2} 00:08:46.793 12:54:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:46.793 spdk_app_start Round 2 00:08:46.793 12:54:05 -- event/event.sh@25 -- # waitforlisten 106583 /var/tmp/spdk-nbd.sock 00:08:46.793 12:54:05 -- common/autotest_common.sh@819 -- # '[' -z 106583 ']' 00:08:46.793 12:54:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:46.793 12:54:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:46.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:46.793 12:54:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:46.793 12:54:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:46.793 12:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:46.793 12:54:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.793 12:54:05 -- common/autotest_common.sh@852 -- # return 0 00:08:46.793 12:54:05 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:47.052 Malloc0 00:08:47.052 12:54:05 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:47.620 Malloc1 00:08:47.620 12:54:06 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@12 -- # local i 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:47.620 /dev/nbd0 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:47.620 12:54:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:47.620 12:54:06 -- common/autotest_common.sh@857 -- # local i 00:08:47.620 12:54:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:47.620 12:54:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:47.620 12:54:06 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:47.620 12:54:06 -- common/autotest_common.sh@861 -- # break 00:08:47.620 12:54:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:47.620 12:54:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:47.620 12:54:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:47.620 1+0 records in 00:08:47.620 1+0 records out 00:08:47.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526949 s, 7.8 MB/s 00:08:47.620 12:54:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.620 12:54:06 -- common/autotest_common.sh@874 -- # size=4096 00:08:47.620 12:54:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.620 12:54:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:47.620 12:54:06 -- common/autotest_common.sh@877 -- # return 0 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.620 12:54:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:47.880 /dev/nbd1 00:08:48.139 12:54:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:48.139 12:54:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:48.139 12:54:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:48.139 12:54:06 -- common/autotest_common.sh@857 -- # local i 00:08:48.139 12:54:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:48.139 12:54:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:48.139 12:54:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:48.139 12:54:06 -- common/autotest_common.sh@861 -- # break 00:08:48.139 12:54:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:48.139 12:54:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:48.139 12:54:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:48.139 1+0 records in 00:08:48.139 1+0 records out 00:08:48.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365157 s, 11.2 MB/s 00:08:48.139 12:54:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:48.139 12:54:06 -- common/autotest_common.sh@874 -- # size=4096 00:08:48.139 12:54:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:48.139 12:54:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:48.139 12:54:06 -- common/autotest_common.sh@877 -- # return 0 00:08:48.139 12:54:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.139 12:54:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:48.139 12:54:06 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.139 12:54:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.139 12:54:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.139 12:54:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:48.139 { 00:08:48.139 "nbd_device": "/dev/nbd0", 00:08:48.139 "bdev_name": "Malloc0" 00:08:48.139 }, 00:08:48.139 { 00:08:48.139 "nbd_device": "/dev/nbd1", 00:08:48.139 "bdev_name": "Malloc1" 00:08:48.139 } 00:08:48.139 ]' 
00:08:48.139 12:54:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.139 12:54:06 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:48.139 { 00:08:48.139 "nbd_device": "/dev/nbd0", 00:08:48.139 "bdev_name": "Malloc0" 00:08:48.139 }, 00:08:48.139 { 00:08:48.139 "nbd_device": "/dev/nbd1", 00:08:48.139 "bdev_name": "Malloc1" 00:08:48.139 } 00:08:48.139 ]' 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:48.398 /dev/nbd1' 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:48.398 /dev/nbd1' 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@65 -- # count=2 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@95 -- # count=2 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:48.398 256+0 records in 00:08:48.398 256+0 records out 00:08:48.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104529 s, 100 MB/s 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:48.398 256+0 records in 00:08:48.398 256+0 records out 00:08:48.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263064 s, 39.9 MB/s 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.398 12:54:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:48.398 256+0 records in 00:08:48.398 256+0 records out 00:08:48.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315773 s, 33.2 MB/s 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@103 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@51 -- # local i 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.399 12:54:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@41 -- # break 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.658 12:54:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:48.917 12:54:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:48.917 12:54:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:48.917 12:54:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:48.917 12:54:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.917 12:54:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.917 12:54:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:48.917 12:54:07 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:49.176 12:54:07 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:49.176 12:54:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.176 12:54:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:49.176 12:54:07 -- bdev/nbd_common.sh@41 -- # break 00:08:49.176 12:54:07 -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.176 12:54:07 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:49.176 12:54:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.176 12:54:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@65 -- # true 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@65 -- # count=0 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@104 -- # count=0 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:49.435 12:54:08 -- bdev/nbd_common.sh@109 -- # return 0 00:08:49.435 
12:54:08 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:49.694 12:54:08 -- event/event.sh@35 -- # sleep 3 00:08:51.074 [2024-06-11 12:54:09.511958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.074 [2024-06-11 12:54:09.667388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.074 [2024-06-11 12:54:09.667398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.074 [2024-06-11 12:54:09.828931] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:51.074 [2024-06-11 12:54:09.829057] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:52.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:52.976 12:54:11 -- event/event.sh@38 -- # waitforlisten 106583 /var/tmp/spdk-nbd.sock 00:08:52.976 12:54:11 -- common/autotest_common.sh@819 -- # '[' -z 106583 ']' 00:08:52.976 12:54:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:52.976 12:54:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:52.976 12:54:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:52.976 12:54:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:52.976 12:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:52.976 12:54:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:52.976 12:54:11 -- common/autotest_common.sh@852 -- # return 0 00:08:52.976 12:54:11 -- event/event.sh@39 -- # killprocess 106583 00:08:52.976 12:54:11 -- common/autotest_common.sh@926 -- # '[' -z 106583 ']' 00:08:52.976 12:54:11 -- common/autotest_common.sh@930 -- # kill -0 106583 00:08:52.976 12:54:11 -- common/autotest_common.sh@931 -- # uname 00:08:52.976 12:54:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:52.976 12:54:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106583 00:08:52.976 killing process with pid 106583 00:08:52.976 12:54:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:52.976 12:54:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:52.976 12:54:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106583' 00:08:52.976 12:54:11 -- common/autotest_common.sh@945 -- # kill 106583 00:08:52.976 12:54:11 -- common/autotest_common.sh@950 -- # wait 106583 00:08:53.911 spdk_app_start is called in Round 0. 00:08:53.911 Shutdown signal received, stop current app iteration 00:08:53.911 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:08:53.911 spdk_app_start is called in Round 1. 00:08:53.911 Shutdown signal received, stop current app iteration 00:08:53.911 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:08:53.911 spdk_app_start is called in Round 2. 00:08:53.911 Shutdown signal received, stop current app iteration 00:08:53.911 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:08:53.911 spdk_app_start is called in Round 3. 
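Between app_repeat rounds the harness does not kill the process from outside; it asks the running app to signal itself through its own RPC socket and then gives the reactors a moment to drain, exactly as the event.sh lines above show:

    # Graceful per-round teardown used by event.sh in the trace.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3        # let the reactors shut down before the next round starts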
00:08:53.911 Shutdown signal received, stop current app iteration 00:08:53.911 ************************************ 00:08:53.911 END TEST app_repeat 00:08:53.911 ************************************ 00:08:53.911 12:54:12 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:53.911 12:54:12 -- event/event.sh@42 -- # return 0 00:08:53.911 00:08:53.911 real 0m20.618s 00:08:53.911 user 0m44.469s 00:08:53.911 sys 0m2.721s 00:08:53.911 12:54:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.911 12:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:53.911 12:54:12 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:53.911 12:54:12 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:53.911 12:54:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:53.911 12:54:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:53.911 12:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:53.911 ************************************ 00:08:53.911 START TEST cpu_locks 00:08:53.911 ************************************ 00:08:53.911 12:54:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:54.169 * Looking for test storage... 00:08:54.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:54.169 12:54:12 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:54.169 12:54:12 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:54.169 12:54:12 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:54.169 12:54:12 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:54.169 12:54:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:54.169 12:54:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.169 12:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:54.169 ************************************ 00:08:54.169 START TEST default_locks 00:08:54.169 ************************************ 00:08:54.169 12:54:12 -- common/autotest_common.sh@1104 -- # default_locks 00:08:54.169 12:54:12 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=107152 00:08:54.169 12:54:12 -- event/cpu_locks.sh@47 -- # waitforlisten 107152 00:08:54.169 12:54:12 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:54.169 12:54:12 -- common/autotest_common.sh@819 -- # '[' -z 107152 ']' 00:08:54.169 12:54:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.169 12:54:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:54.169 12:54:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.169 12:54:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:54.169 12:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:54.170 [2024-06-11 12:54:12.894235] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:54.170 [2024-06-11 12:54:12.894416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107152 ] 00:08:54.428 [2024-06-11 12:54:13.062035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.428 [2024-06-11 12:54:13.242380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:54.428 [2024-06-11 12:54:13.242607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.831 12:54:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:55.831 12:54:14 -- common/autotest_common.sh@852 -- # return 0 00:08:55.831 12:54:14 -- event/cpu_locks.sh@49 -- # locks_exist 107152 00:08:55.831 12:54:14 -- event/cpu_locks.sh@22 -- # lslocks -p 107152 00:08:55.831 12:54:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:56.089 12:54:14 -- event/cpu_locks.sh@50 -- # killprocess 107152 00:08:56.089 12:54:14 -- common/autotest_common.sh@926 -- # '[' -z 107152 ']' 00:08:56.089 12:54:14 -- common/autotest_common.sh@930 -- # kill -0 107152 00:08:56.089 12:54:14 -- common/autotest_common.sh@931 -- # uname 00:08:56.089 12:54:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:56.089 12:54:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107152 00:08:56.089 12:54:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:56.089 12:54:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:56.089 killing process with pid 107152 00:08:56.089 12:54:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107152' 00:08:56.089 12:54:14 -- common/autotest_common.sh@945 -- # kill 107152 00:08:56.089 12:54:14 -- common/autotest_common.sh@950 -- # wait 107152 00:08:57.991 12:54:16 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 107152 00:08:57.991 12:54:16 -- common/autotest_common.sh@640 -- # local es=0 00:08:57.991 12:54:16 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107152 00:08:57.991 12:54:16 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:08:57.991 12:54:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:57.991 12:54:16 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:08:57.991 12:54:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:57.991 12:54:16 -- common/autotest_common.sh@643 -- # waitforlisten 107152 00:08:57.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.991 12:54:16 -- common/autotest_common.sh@819 -- # '[' -z 107152 ']' 00:08:57.991 12:54:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.991 12:54:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:57.991 12:54:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
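default_locks starts a single spdk_tgt on core mask 0x1 and then verifies the core lock is really held before tearing the target down. Both helpers traced above are small; a sketch of each follows (the real killprocess has extra handling for sudo-wrapped targets that is only hinted at here):

    # True if the pid holds an SPDK CPU-core lock file (cf. cpu_locks.sh locks_exist).
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # Stop a test target and reap it (simplified from common/autotest_common.sh killprocess).
    killprocess() {
        local pid=$1
        kill -0 "$pid"                                           # bail out if it is already gone
        if [ "$(uname)" = Linux ]; then
            # the real helper special-cases sudo wrappers; this sketch just refuses them
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }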
00:08:57.991 12:54:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:57.991 12:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.991 ERROR: process (pid: 107152) is no longer running 00:08:57.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107152) - No such process 00:08:57.991 12:54:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:57.991 12:54:16 -- common/autotest_common.sh@852 -- # return 1 00:08:57.991 12:54:16 -- common/autotest_common.sh@643 -- # es=1 00:08:57.991 12:54:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:57.991 12:54:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:57.991 12:54:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:57.991 12:54:16 -- event/cpu_locks.sh@54 -- # no_locks 00:08:57.991 12:54:16 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:08:57.992 12:54:16 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:57.992 12:54:16 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:57.992 00:08:57.992 real 0m3.876s 00:08:57.992 user 0m4.097s 00:08:57.992 sys 0m0.665s 00:08:57.992 12:54:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.992 12:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.992 ************************************ 00:08:57.992 END TEST default_locks 00:08:57.992 ************************************ 00:08:57.992 12:54:16 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:57.992 12:54:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:57.992 12:54:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.992 12:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.992 ************************************ 00:08:57.992 START TEST default_locks_via_rpc 00:08:57.992 ************************************ 00:08:57.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.992 12:54:16 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:08:57.992 12:54:16 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=107234 00:08:57.992 12:54:16 -- event/cpu_locks.sh@63 -- # waitforlisten 107234 00:08:57.992 12:54:16 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:57.992 12:54:16 -- common/autotest_common.sh@819 -- # '[' -z 107234 ']' 00:08:57.992 12:54:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.992 12:54:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:57.992 12:54:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.992 12:54:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:57.992 12:54:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.992 [2024-06-11 12:54:16.819758] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
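After the target exits, default_locks also proves the negative: waitforlisten on the dead pid must fail ("No such process" above), and no_locks asserts that nothing was left behind under /var/tmp. An equivalent one-liner for that last check; the traced version expands the glob into an array instead.

    # Succeeds only when no spdk_cpu_lock files remain (cf. cpu_locks.sh no_locks).
    no_locks() {
        # compgen -G prints matching paths and returns non-zero when the glob matches nothing
        ! compgen -G '/var/tmp/spdk_cpu_lock*' > /dev/null
    }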
00:08:57.992 [2024-06-11 12:54:16.820195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107234 ] 00:08:58.250 [2024-06-11 12:54:16.989359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.509 [2024-06-11 12:54:17.180306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:58.509 [2024-06-11 12:54:17.180768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.885 12:54:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:59.885 12:54:18 -- common/autotest_common.sh@852 -- # return 0 00:08:59.885 12:54:18 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:59.885 12:54:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.885 12:54:18 -- common/autotest_common.sh@10 -- # set +x 00:08:59.885 12:54:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.885 12:54:18 -- event/cpu_locks.sh@67 -- # no_locks 00:08:59.885 12:54:18 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:08:59.885 12:54:18 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:59.885 12:54:18 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:59.885 12:54:18 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:59.885 12:54:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.885 12:54:18 -- common/autotest_common.sh@10 -- # set +x 00:08:59.885 12:54:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.885 12:54:18 -- event/cpu_locks.sh@71 -- # locks_exist 107234 00:08:59.885 12:54:18 -- event/cpu_locks.sh@22 -- # lslocks -p 107234 00:08:59.885 12:54:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:59.885 12:54:18 -- event/cpu_locks.sh@73 -- # killprocess 107234 00:08:59.885 12:54:18 -- common/autotest_common.sh@926 -- # '[' -z 107234 ']' 00:08:59.885 12:54:18 -- common/autotest_common.sh@930 -- # kill -0 107234 00:08:59.885 12:54:18 -- common/autotest_common.sh@931 -- # uname 00:08:59.885 12:54:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:59.885 12:54:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107234 00:08:59.885 12:54:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:59.885 12:54:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:59.885 12:54:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107234' 00:08:59.885 killing process with pid 107234 00:08:59.885 12:54:18 -- common/autotest_common.sh@945 -- # kill 107234 00:08:59.885 12:54:18 -- common/autotest_common.sh@950 -- # wait 107234 00:09:01.815 ************************************ 00:09:01.815 END TEST default_locks_via_rpc 00:09:01.815 ************************************ 00:09:01.815 00:09:01.815 real 0m3.789s 00:09:01.815 user 0m3.941s 00:09:01.815 sys 0m0.615s 00:09:01.815 12:54:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.815 12:54:20 -- common/autotest_common.sh@10 -- # set +x 00:09:01.815 12:54:20 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:01.815 12:54:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:01.815 12:54:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:01.815 12:54:20 -- common/autotest_common.sh@10 -- # set +x 00:09:01.815 
************************************ 00:09:01.815 START TEST non_locking_app_on_locked_coremask 00:09:01.815 ************************************ 00:09:01.815 12:54:20 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:09:01.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.815 12:54:20 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=107338 00:09:01.815 12:54:20 -- event/cpu_locks.sh@81 -- # waitforlisten 107338 /var/tmp/spdk.sock 00:09:01.815 12:54:20 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:01.815 12:54:20 -- common/autotest_common.sh@819 -- # '[' -z 107338 ']' 00:09:01.815 12:54:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.815 12:54:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:01.815 12:54:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.816 12:54:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:01.816 12:54:20 -- common/autotest_common.sh@10 -- # set +x 00:09:02.074 [2024-06-11 12:54:20.660223] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:02.074 [2024-06-11 12:54:20.660673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107338 ] 00:09:02.074 [2024-06-11 12:54:20.829893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.332 [2024-06-11 12:54:21.065906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:02.332 [2024-06-11 12:54:21.066351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:03.708 12:54:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:03.708 12:54:22 -- common/autotest_common.sh@852 -- # return 0 00:09:03.708 12:54:22 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=107368 00:09:03.708 12:54:22 -- event/cpu_locks.sh@85 -- # waitforlisten 107368 /var/tmp/spdk2.sock 00:09:03.708 12:54:22 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:03.708 12:54:22 -- common/autotest_common.sh@819 -- # '[' -z 107368 ']' 00:09:03.708 12:54:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:03.708 12:54:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:03.708 12:54:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:03.708 12:54:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:03.708 12:54:22 -- common/autotest_common.sh@10 -- # set +x 00:09:03.708 [2024-06-11 12:54:22.286971] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:03.708 [2024-06-11 12:54:22.287682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107368 ] 00:09:03.708 [2024-06-11 12:54:22.457080] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:03.708 [2024-06-11 12:54:22.457136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.274 [2024-06-11 12:54:22.814466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:04.274 [2024-06-11 12:54:22.814706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.175 12:54:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:06.175 12:54:24 -- common/autotest_common.sh@852 -- # return 0 00:09:06.175 12:54:24 -- event/cpu_locks.sh@87 -- # locks_exist 107338 00:09:06.175 12:54:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:06.175 12:54:24 -- event/cpu_locks.sh@22 -- # lslocks -p 107338 00:09:06.175 12:54:25 -- event/cpu_locks.sh@89 -- # killprocess 107338 00:09:06.175 12:54:25 -- common/autotest_common.sh@926 -- # '[' -z 107338 ']' 00:09:06.175 12:54:25 -- common/autotest_common.sh@930 -- # kill -0 107338 00:09:06.175 12:54:25 -- common/autotest_common.sh@931 -- # uname 00:09:06.175 12:54:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:06.175 12:54:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107338 00:09:06.433 killing process with pid 107338 00:09:06.433 12:54:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:06.433 12:54:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:06.433 12:54:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107338' 00:09:06.434 12:54:25 -- common/autotest_common.sh@945 -- # kill 107338 00:09:06.434 12:54:25 -- common/autotest_common.sh@950 -- # wait 107338 00:09:10.620 12:54:28 -- event/cpu_locks.sh@90 -- # killprocess 107368 00:09:10.620 12:54:28 -- common/autotest_common.sh@926 -- # '[' -z 107368 ']' 00:09:10.620 12:54:28 -- common/autotest_common.sh@930 -- # kill -0 107368 00:09:10.620 12:54:28 -- common/autotest_common.sh@931 -- # uname 00:09:10.620 12:54:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:10.620 12:54:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107368 00:09:10.620 killing process with pid 107368 00:09:10.620 12:54:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:10.620 12:54:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:10.620 12:54:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107368' 00:09:10.620 12:54:28 -- common/autotest_common.sh@945 -- # kill 107368 00:09:10.620 12:54:28 -- common/autotest_common.sh@950 -- # wait 107368 00:09:11.997 ************************************ 00:09:11.997 END TEST non_locking_app_on_locked_coremask 00:09:11.997 ************************************ 00:09:11.997 00:09:11.997 real 0m10.161s 00:09:11.997 user 0m10.863s 00:09:11.997 sys 0m1.264s 00:09:11.997 12:54:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.997 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:09:11.997 12:54:30 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:11.997 12:54:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:11.997 12:54:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.997 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:09:11.997 ************************************ 00:09:11.997 START TEST locking_app_on_unlocked_coremask 00:09:11.997 ************************************ 00:09:11.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
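non_locking_app_on_locked_coremask, which finishes above, is the point of the lock mechanism: a second spdk_tgt normally cannot start on an already-locked core, but with --disable-cpumask-locks it opts out ("CPU core locks deactivated.") and shares core 0. A stripped-down sketch of the two launches; the harness additionally waits on each instance's RPC socket via waitforlisten before proceeding, which is omitted here.

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_BIN" -m 0x1 &                                                  # takes the core-0 lock
    first=$!
    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # starts anyway, no lock taken
    second=$!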
00:09:11.997 12:54:30 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:09:11.997 12:54:30 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=107525 00:09:11.997 12:54:30 -- event/cpu_locks.sh@99 -- # waitforlisten 107525 /var/tmp/spdk.sock 00:09:11.997 12:54:30 -- common/autotest_common.sh@819 -- # '[' -z 107525 ']' 00:09:11.997 12:54:30 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:11.997 12:54:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.997 12:54:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:11.997 12:54:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.997 12:54:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:11.997 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:09:12.257 [2024-06-11 12:54:30.874531] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:12.257 [2024-06-11 12:54:30.874943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107525 ] 00:09:12.257 [2024-06-11 12:54:31.043970] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:12.257 [2024-06-11 12:54:31.044419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.516 [2024-06-11 12:54:31.286965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:12.516 [2024-06-11 12:54:31.287724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:13.898 12:54:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:13.898 12:54:32 -- common/autotest_common.sh@852 -- # return 0 00:09:13.898 12:54:32 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=107560 00:09:13.898 12:54:32 -- event/cpu_locks.sh@103 -- # waitforlisten 107560 /var/tmp/spdk2.sock 00:09:13.898 12:54:32 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:13.898 12:54:32 -- common/autotest_common.sh@819 -- # '[' -z 107560 ']' 00:09:13.898 12:54:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:13.898 12:54:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:13.898 12:54:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:13.898 12:54:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:13.898 12:54:32 -- common/autotest_common.sh@10 -- # set +x 00:09:13.898 [2024-06-11 12:54:32.593724] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:13.898 [2024-06-11 12:54:32.595155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107560 ] 00:09:14.157 [2024-06-11 12:54:32.757497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.415 [2024-06-11 12:54:33.136465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:14.415 [2024-06-11 12:54:33.136680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.320 12:54:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:16.320 12:54:34 -- common/autotest_common.sh@852 -- # return 0 00:09:16.320 12:54:34 -- event/cpu_locks.sh@105 -- # locks_exist 107560 00:09:16.320 12:54:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:16.320 12:54:34 -- event/cpu_locks.sh@22 -- # lslocks -p 107560 00:09:16.579 12:54:35 -- event/cpu_locks.sh@107 -- # killprocess 107525 00:09:16.580 12:54:35 -- common/autotest_common.sh@926 -- # '[' -z 107525 ']' 00:09:16.580 12:54:35 -- common/autotest_common.sh@930 -- # kill -0 107525 00:09:16.580 12:54:35 -- common/autotest_common.sh@931 -- # uname 00:09:16.580 12:54:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:16.580 12:54:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107525 00:09:16.580 12:54:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:16.580 12:54:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:16.580 12:54:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107525' 00:09:16.580 killing process with pid 107525 00:09:16.580 12:54:35 -- common/autotest_common.sh@945 -- # kill 107525 00:09:16.580 12:54:35 -- common/autotest_common.sh@950 -- # wait 107525 00:09:20.814 12:54:39 -- event/cpu_locks.sh@108 -- # killprocess 107560 00:09:20.814 12:54:39 -- common/autotest_common.sh@926 -- # '[' -z 107560 ']' 00:09:20.814 12:54:39 -- common/autotest_common.sh@930 -- # kill -0 107560 00:09:20.814 12:54:39 -- common/autotest_common.sh@931 -- # uname 00:09:20.814 12:54:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:20.814 12:54:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107560 00:09:20.814 killing process with pid 107560 00:09:20.814 12:54:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:20.814 12:54:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:20.814 12:54:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107560' 00:09:20.814 12:54:39 -- common/autotest_common.sh@945 -- # kill 107560 00:09:20.814 12:54:39 -- common/autotest_common.sh@950 -- # wait 107560 00:09:22.719 00:09:22.719 real 0m10.570s 00:09:22.719 user 0m11.239s 00:09:22.719 sys 0m1.339s 00:09:22.719 ************************************ 00:09:22.719 END TEST locking_app_on_unlocked_coremask 00:09:22.719 ************************************ 00:09:22.719 12:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.719 12:54:41 -- common/autotest_common.sh@10 -- # set +x 00:09:22.719 12:54:41 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:22.719 12:54:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:22.719 12:54:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:22.719 12:54:41 -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.719 ************************************ 00:09:22.719 START TEST locking_app_on_locked_coremask 00:09:22.719 ************************************ 00:09:22.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.719 12:54:41 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:09:22.719 12:54:41 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=107720 00:09:22.719 12:54:41 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:22.719 12:54:41 -- event/cpu_locks.sh@116 -- # waitforlisten 107720 /var/tmp/spdk.sock 00:09:22.719 12:54:41 -- common/autotest_common.sh@819 -- # '[' -z 107720 ']' 00:09:22.719 12:54:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.719 12:54:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:22.719 12:54:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.719 12:54:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:22.719 12:54:41 -- common/autotest_common.sh@10 -- # set +x 00:09:22.719 [2024-06-11 12:54:41.503245] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:22.719 [2024-06-11 12:54:41.503705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107720 ] 00:09:22.978 [2024-06-11 12:54:41.673998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.237 [2024-06-11 12:54:41.877529] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:23.237 [2024-06-11 12:54:41.877910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.611 12:54:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:24.611 12:54:43 -- common/autotest_common.sh@852 -- # return 0 00:09:24.611 12:54:43 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=107748 00:09:24.611 12:54:43 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 107748 /var/tmp/spdk2.sock 00:09:24.611 12:54:43 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:24.611 12:54:43 -- common/autotest_common.sh@640 -- # local es=0 00:09:24.611 12:54:43 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107748 /var/tmp/spdk2.sock 00:09:24.611 12:54:43 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:24.611 12:54:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:24.611 12:54:43 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:24.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
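locking_app_on_locked_coremask inverts the previous case: pid 107720 is started with -m 0x1 but without --disable-cpumask-locks, so it holds the core-0 lock, and the second launch is wrapped in NOT because it is expected to fail (the "Cannot create lock on core 0" error follows below). A rough sketch of the NOT idiom the trace relies on; the real helper also validates its argument via valid_exec_arg and distinguishes signal exits:

  NOT() {                  # succeed only when the wrapped command fails
    local es=0
    "$@" || es=$?
    (( es != 0 ))
  }
  NOT waitforlisten 107748 /var/tmp/spdk2.sock   # passes because the second target aborts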
00:09:24.611 12:54:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:24.611 12:54:43 -- common/autotest_common.sh@643 -- # waitforlisten 107748 /var/tmp/spdk2.sock 00:09:24.611 12:54:43 -- common/autotest_common.sh@819 -- # '[' -z 107748 ']' 00:09:24.611 12:54:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:24.611 12:54:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:24.611 12:54:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:24.611 12:54:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:24.612 12:54:43 -- common/autotest_common.sh@10 -- # set +x 00:09:24.612 [2024-06-11 12:54:43.234124] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:24.612 [2024-06-11 12:54:43.235084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107748 ] 00:09:24.612 [2024-06-11 12:54:43.394346] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 107720 has claimed it. 00:09:24.612 [2024-06-11 12:54:43.394449] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:25.179 ERROR: process (pid: 107748) is no longer running 00:09:25.179 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107748) - No such process 00:09:25.179 12:54:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:25.179 12:54:43 -- common/autotest_common.sh@852 -- # return 1 00:09:25.179 12:54:43 -- common/autotest_common.sh@643 -- # es=1 00:09:25.179 12:54:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:25.179 12:54:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:25.179 12:54:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:25.179 12:54:43 -- event/cpu_locks.sh@122 -- # locks_exist 107720 00:09:25.179 12:54:43 -- event/cpu_locks.sh@22 -- # lslocks -p 107720 00:09:25.179 12:54:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:25.438 12:54:44 -- event/cpu_locks.sh@124 -- # killprocess 107720 00:09:25.438 12:54:44 -- common/autotest_common.sh@926 -- # '[' -z 107720 ']' 00:09:25.438 12:54:44 -- common/autotest_common.sh@930 -- # kill -0 107720 00:09:25.438 12:54:44 -- common/autotest_common.sh@931 -- # uname 00:09:25.438 12:54:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:25.438 12:54:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107720 00:09:25.438 killing process with pid 107720 00:09:25.438 12:54:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:25.438 12:54:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:25.438 12:54:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107720' 00:09:25.438 12:54:44 -- common/autotest_common.sh@945 -- # kill 107720 00:09:25.438 12:54:44 -- common/autotest_common.sh@950 -- # wait 107720 00:09:27.342 00:09:27.342 real 0m4.727s 00:09:27.342 user 0m5.195s 00:09:27.342 sys 0m0.706s 00:09:27.342 12:54:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.342 ************************************ 00:09:27.342 END TEST locking_app_on_locked_coremask 00:09:27.342 ************************************ 00:09:27.342 12:54:46 -- common/autotest_common.sh@10 
-- # set +x 00:09:27.601 12:54:46 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:27.601 12:54:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:27.601 12:54:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:27.601 12:54:46 -- common/autotest_common.sh@10 -- # set +x 00:09:27.601 ************************************ 00:09:27.601 START TEST locking_overlapped_coremask 00:09:27.601 ************************************ 00:09:27.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.601 12:54:46 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:09:27.601 12:54:46 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=107818 00:09:27.601 12:54:46 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:27.601 12:54:46 -- event/cpu_locks.sh@133 -- # waitforlisten 107818 /var/tmp/spdk.sock 00:09:27.601 12:54:46 -- common/autotest_common.sh@819 -- # '[' -z 107818 ']' 00:09:27.601 12:54:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.601 12:54:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:27.601 12:54:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.601 12:54:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:27.601 12:54:46 -- common/autotest_common.sh@10 -- # set +x 00:09:27.601 [2024-06-11 12:54:46.274919] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:27.601 [2024-06-11 12:54:46.275266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107818 ] 00:09:27.859 [2024-06-11 12:54:46.442412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.859 [2024-06-11 12:54:46.645924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:27.859 [2024-06-11 12:54:46.646557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.859 [2024-06-11 12:54:46.646706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.859 [2024-06-11 12:54:46.646701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.246 12:54:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:29.246 12:54:47 -- common/autotest_common.sh@852 -- # return 0 00:09:29.246 12:54:47 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=107849 00:09:29.246 12:54:47 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 107849 /var/tmp/spdk2.sock 00:09:29.246 12:54:47 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:29.246 12:54:47 -- common/autotest_common.sh@640 -- # local es=0 00:09:29.246 12:54:47 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107849 /var/tmp/spdk2.sock 00:09:29.246 12:54:47 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:29.246 12:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:29.246 12:54:47 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:29.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
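locking_overlapped_coremask starts its first target with -m 0x7 (cores 0-2, locks enabled) and then attempts a second target with -m 0x1c (cores 2-4). The two masks share exactly one core, which is why the failure a few lines below names core 2. The overlap is plain bitmask arithmetic:

  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. only core 2 is contested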
00:09:29.246 12:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:29.246 12:54:47 -- common/autotest_common.sh@643 -- # waitforlisten 107849 /var/tmp/spdk2.sock 00:09:29.246 12:54:47 -- common/autotest_common.sh@819 -- # '[' -z 107849 ']' 00:09:29.246 12:54:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:29.246 12:54:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:29.246 12:54:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:29.246 12:54:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:29.246 12:54:47 -- common/autotest_common.sh@10 -- # set +x 00:09:29.246 [2024-06-11 12:54:48.029846] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:29.246 [2024-06-11 12:54:48.030321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107849 ] 00:09:29.504 [2024-06-11 12:54:48.225328] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 107818 has claimed it. 00:09:29.504 [2024-06-11 12:54:48.237535] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:30.071 ERROR: process (pid: 107849) is no longer running 00:09:30.071 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107849) - No such process 00:09:30.071 12:54:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:30.071 12:54:48 -- common/autotest_common.sh@852 -- # return 1 00:09:30.071 12:54:48 -- common/autotest_common.sh@643 -- # es=1 00:09:30.071 12:54:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:30.071 12:54:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:30.071 12:54:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:30.071 12:54:48 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:30.071 12:54:48 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:30.071 12:54:48 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:30.071 12:54:48 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:30.071 12:54:48 -- event/cpu_locks.sh@141 -- # killprocess 107818 00:09:30.071 12:54:48 -- common/autotest_common.sh@926 -- # '[' -z 107818 ']' 00:09:30.071 12:54:48 -- common/autotest_common.sh@930 -- # kill -0 107818 00:09:30.071 12:54:48 -- common/autotest_common.sh@931 -- # uname 00:09:30.071 12:54:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:30.071 12:54:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107818 00:09:30.071 killing process with pid 107818 00:09:30.071 12:54:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:30.071 12:54:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:30.071 12:54:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107818' 00:09:30.071 12:54:48 -- common/autotest_common.sh@945 -- # kill 107818 00:09:30.071 12:54:48 -- common/autotest_common.sh@950 -- # wait 107818 00:09:32.599 
************************************ 00:09:32.599 END TEST locking_overlapped_coremask 00:09:32.599 ************************************ 00:09:32.599 00:09:32.599 real 0m4.736s 00:09:32.599 user 0m12.980s 00:09:32.599 sys 0m0.612s 00:09:32.599 12:54:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.599 12:54:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.599 12:54:50 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:32.599 12:54:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:32.599 12:54:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.599 12:54:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.599 ************************************ 00:09:32.599 START TEST locking_overlapped_coremask_via_rpc 00:09:32.599 ************************************ 00:09:32.599 12:54:50 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:09:32.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.599 12:54:50 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=107934 00:09:32.599 12:54:50 -- event/cpu_locks.sh@149 -- # waitforlisten 107934 /var/tmp/spdk.sock 00:09:32.599 12:54:50 -- common/autotest_common.sh@819 -- # '[' -z 107934 ']' 00:09:32.599 12:54:50 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:32.599 12:54:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.599 12:54:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:32.599 12:54:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.599 12:54:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:32.599 12:54:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.599 [2024-06-11 12:54:51.070304] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:32.599 [2024-06-11 12:54:51.070821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107934 ] 00:09:32.599 [2024-06-11 12:54:51.254466] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:32.599 [2024-06-11 12:54:51.254750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.857 [2024-06-11 12:54:51.506572] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:32.857 [2024-06-11 12:54:51.507316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.857 [2024-06-11 12:54:51.507503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.857 [2024-06-11 12:54:51.507511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
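locking_overlapped_coremask_via_rpc reuses the overlapping 0x7/0x1c masks, but both targets are launched with --disable-cpumask-locks so that both come up; the core locks are then requested at runtime over JSON-RPC rather than at startup. The two launches as the xtrace shows them, trimmed to the relevant flags:

  spdk_tgt -m 0x7  --disable-cpumask-locks &                         # pid 107934
  spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # pid 107966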
00:09:34.232 12:54:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.232 12:54:52 -- common/autotest_common.sh@852 -- # return 0 00:09:34.232 12:54:52 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:34.232 12:54:52 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=107966 00:09:34.232 12:54:52 -- event/cpu_locks.sh@153 -- # waitforlisten 107966 /var/tmp/spdk2.sock 00:09:34.232 12:54:52 -- common/autotest_common.sh@819 -- # '[' -z 107966 ']' 00:09:34.232 12:54:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:34.232 12:54:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:34.232 12:54:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:34.232 12:54:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:34.232 12:54:52 -- common/autotest_common.sh@10 -- # set +x 00:09:34.232 [2024-06-11 12:54:52.748564] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:34.232 [2024-06-11 12:54:52.748868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107966 ] 00:09:34.232 [2024-06-11 12:54:52.929562] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:34.232 [2024-06-11 12:54:52.949643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:34.490 [2024-06-11 12:54:53.327197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:34.490 [2024-06-11 12:54:53.327654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.748 [2024-06-11 12:54:53.341779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.748 [2024-06-11 12:54:53.341781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:36.696 12:54:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:36.696 12:54:55 -- common/autotest_common.sh@852 -- # return 0 00:09:36.696 12:54:55 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:36.696 12:54:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:36.696 12:54:55 -- common/autotest_common.sh@10 -- # set +x 00:09:36.696 12:54:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:36.696 12:54:55 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:36.696 12:54:55 -- common/autotest_common.sh@640 -- # local es=0 00:09:36.696 12:54:55 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:36.696 12:54:55 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:09:36.696 12:54:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:36.696 12:54:55 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:09:36.696 12:54:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:36.696 12:54:55 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:36.696 12:54:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:36.696 12:54:55 -- common/autotest_common.sh@10 -- # set +x 00:09:36.696 [2024-06-11 12:54:55.221714] app.c: 665:claim_cpu_cores: *ERROR*: Cannot 
create lock on core 2, probably process 107934 has claimed it. 00:09:36.696 request: 00:09:36.696 { 00:09:36.696 "method": "framework_enable_cpumask_locks", 00:09:36.696 "req_id": 1 00:09:36.696 } 00:09:36.696 Got JSON-RPC error response 00:09:36.696 response: 00:09:36.696 { 00:09:36.696 "code": -32603, 00:09:36.696 "message": "Failed to claim CPU core: 2" 00:09:36.696 } 00:09:36.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.696 12:54:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:09:36.696 12:54:55 -- common/autotest_common.sh@643 -- # es=1 00:09:36.696 12:54:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:36.696 12:54:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:36.696 12:54:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:36.696 12:54:55 -- event/cpu_locks.sh@158 -- # waitforlisten 107934 /var/tmp/spdk.sock 00:09:36.696 12:54:55 -- common/autotest_common.sh@819 -- # '[' -z 107934 ']' 00:09:36.696 12:54:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.696 12:54:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:36.696 12:54:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.696 12:54:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:36.696 12:54:55 -- common/autotest_common.sh@10 -- # set +x 00:09:36.696 12:54:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:36.696 12:54:55 -- common/autotest_common.sh@852 -- # return 0 00:09:36.696 12:54:55 -- event/cpu_locks.sh@159 -- # waitforlisten 107966 /var/tmp/spdk2.sock 00:09:36.696 12:54:55 -- common/autotest_common.sh@819 -- # '[' -z 107966 ']' 00:09:36.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:36.696 12:54:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:36.696 12:54:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:36.696 12:54:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
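The error above is the expected outcome: the first target (107934) has already taken its core locks via rpc_cmd framework_enable_cpumask_locks, so the same request sent to the second target (107966) fails when it tries to claim the shared core 2, and the JSON-RPC response carries code -32603. rpc_cmd in the trace is a thin wrapper around SPDK's scripts/rpc.py, so the equivalent direct calls would look roughly like this (socket path taken from the log):

  scripts/rpc.py framework_enable_cpumask_locks                          # first target: succeeds
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target:
  #   {"code": -32603, "message": "Failed to claim CPU core: 2"}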
00:09:36.696 12:54:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:36.696 12:54:55 -- common/autotest_common.sh@10 -- # set +x 00:09:36.954 ************************************ 00:09:36.954 END TEST locking_overlapped_coremask_via_rpc 00:09:36.954 ************************************ 00:09:36.954 12:54:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:36.954 12:54:55 -- common/autotest_common.sh@852 -- # return 0 00:09:36.954 12:54:55 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:36.954 12:54:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:36.954 12:54:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:36.954 12:54:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:36.954 00:09:36.954 real 0m4.713s 00:09:36.954 user 0m1.890s 00:09:36.954 sys 0m0.237s 00:09:36.954 12:54:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.954 12:54:55 -- common/autotest_common.sh@10 -- # set +x 00:09:36.954 12:54:55 -- event/cpu_locks.sh@174 -- # cleanup 00:09:36.954 12:54:55 -- event/cpu_locks.sh@15 -- # [[ -z 107934 ]] 00:09:36.954 12:54:55 -- event/cpu_locks.sh@15 -- # killprocess 107934 00:09:36.954 12:54:55 -- common/autotest_common.sh@926 -- # '[' -z 107934 ']' 00:09:36.954 12:54:55 -- common/autotest_common.sh@930 -- # kill -0 107934 00:09:36.954 12:54:55 -- common/autotest_common.sh@931 -- # uname 00:09:36.954 12:54:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:36.954 12:54:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107934 00:09:36.954 killing process with pid 107934 00:09:36.954 12:54:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:36.954 12:54:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:36.955 12:54:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107934' 00:09:36.955 12:54:55 -- common/autotest_common.sh@945 -- # kill 107934 00:09:36.955 12:54:55 -- common/autotest_common.sh@950 -- # wait 107934 00:09:39.485 12:54:57 -- event/cpu_locks.sh@16 -- # [[ -z 107966 ]] 00:09:39.485 12:54:57 -- event/cpu_locks.sh@16 -- # killprocess 107966 00:09:39.485 12:54:57 -- common/autotest_common.sh@926 -- # '[' -z 107966 ']' 00:09:39.485 12:54:57 -- common/autotest_common.sh@930 -- # kill -0 107966 00:09:39.485 12:54:57 -- common/autotest_common.sh@931 -- # uname 00:09:39.485 12:54:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:39.485 12:54:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107966 00:09:39.485 killing process with pid 107966 00:09:39.485 12:54:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:39.485 12:54:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:39.485 12:54:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107966' 00:09:39.485 12:54:57 -- common/autotest_common.sh@945 -- # kill 107966 00:09:39.485 12:54:57 -- common/autotest_common.sh@950 -- # wait 107966 00:09:41.389 12:54:59 -- event/cpu_locks.sh@18 -- # rm -f 00:09:41.389 Process with pid 107934 is not found 00:09:41.389 Process with pid 107966 is not found 00:09:41.389 12:54:59 -- event/cpu_locks.sh@1 -- # cleanup 00:09:41.389 12:54:59 -- event/cpu_locks.sh@15 -- # [[ -z 
107934 ]] 00:09:41.389 12:54:59 -- event/cpu_locks.sh@15 -- # killprocess 107934 00:09:41.389 12:54:59 -- common/autotest_common.sh@926 -- # '[' -z 107934 ']' 00:09:41.389 12:54:59 -- common/autotest_common.sh@930 -- # kill -0 107934 00:09:41.389 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (107934) - No such process 00:09:41.389 12:54:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 107934 is not found' 00:09:41.389 12:54:59 -- event/cpu_locks.sh@16 -- # [[ -z 107966 ]] 00:09:41.389 12:54:59 -- event/cpu_locks.sh@16 -- # killprocess 107966 00:09:41.389 12:54:59 -- common/autotest_common.sh@926 -- # '[' -z 107966 ']' 00:09:41.389 12:54:59 -- common/autotest_common.sh@930 -- # kill -0 107966 00:09:41.389 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (107966) - No such process 00:09:41.389 12:54:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 107966 is not found' 00:09:41.389 12:54:59 -- event/cpu_locks.sh@18 -- # rm -f 00:09:41.389 ************************************ 00:09:41.389 END TEST cpu_locks 00:09:41.389 ************************************ 00:09:41.389 00:09:41.389 real 0m47.029s 00:09:41.389 user 1m22.680s 00:09:41.389 sys 0m6.442s 00:09:41.389 12:54:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.389 12:54:59 -- common/autotest_common.sh@10 -- # set +x 00:09:41.389 00:09:41.389 real 1m18.863s 00:09:41.389 user 2m23.852s 00:09:41.389 sys 0m10.108s 00:09:41.389 12:54:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.389 12:54:59 -- common/autotest_common.sh@10 -- # set +x 00:09:41.389 ************************************ 00:09:41.389 END TEST event 00:09:41.389 ************************************ 00:09:41.389 12:54:59 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:41.389 12:54:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:41.389 12:54:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:41.389 12:54:59 -- common/autotest_common.sh@10 -- # set +x 00:09:41.389 ************************************ 00:09:41.389 START TEST thread 00:09:41.389 ************************************ 00:09:41.389 12:54:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:41.389 * Looking for test storage... 00:09:41.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:41.389 12:54:59 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:41.389 12:54:59 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:41.389 12:54:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:41.389 12:54:59 -- common/autotest_common.sh@10 -- # set +x 00:09:41.389 ************************************ 00:09:41.389 START TEST thread_poller_perf 00:09:41.389 ************************************ 00:09:41.389 12:54:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:41.389 [2024-06-11 12:54:59.975964] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:41.389 [2024-06-11 12:54:59.976342] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108171 ] 00:09:41.389 [2024-06-11 12:55:00.143669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.648 [2024-06-11 12:55:00.310849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.648 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:43.025 ====================================== 00:09:43.025 busy:2211746822 (cyc) 00:09:43.025 total_run_count: 361000 00:09:43.025 tsc_hz: 2200000000 (cyc) 00:09:43.025 ====================================== 00:09:43.025 poller_cost: 6126 (cyc), 2784 (nsec) 00:09:43.025 ************************************ 00:09:43.025 END TEST thread_poller_perf 00:09:43.025 ************************************ 00:09:43.025 00:09:43.025 real 0m1.718s 00:09:43.025 user 0m1.505s 00:09:43.025 sys 0m0.112s 00:09:43.025 12:55:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.025 12:55:01 -- common/autotest_common.sh@10 -- # set +x 00:09:43.025 12:55:01 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:43.025 12:55:01 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:43.025 12:55:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:43.025 12:55:01 -- common/autotest_common.sh@10 -- # set +x 00:09:43.025 ************************************ 00:09:43.025 START TEST thread_poller_perf 00:09:43.025 ************************************ 00:09:43.025 12:55:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:43.025 [2024-06-11 12:55:01.734946] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:43.026 [2024-06-11 12:55:01.735204] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108214 ] 00:09:43.284 [2024-06-11 12:55:01.889961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.284 [2024-06-11 12:55:02.061415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.284 Running 1000 pollers for 1 seconds with 0 microseconds period. 
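The poller_cost line that poller_perf prints is simple arithmetic over the counters in its summary: busy TSC cycles divided by total_run_count gives cycles per poll, and dividing by tsc_hz converts that to nanoseconds. For the 1 microsecond-period run above, 2211746822 / 361000 is about 6126 cycles and 6126 / 2.2 GHz is about 2784 ns, reproducing the reported figures when truncated; the 0 microsecond run that follows works out to roughly 480 cycles / 218 ns the same way. A quick check in shell:

  awk 'BEGIN { busy=2211746822; runs=361000; hz=2200000000
               cyc=int(busy/runs); printf "%d cyc, %d nsec\n", cyc, int(cyc/hz*1e9) }'
  # -> 6126 cyc, 2784 nsec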
00:09:44.657 ====================================== 00:09:44.657 busy:2205271388 (cyc) 00:09:44.657 total_run_count: 4589000 00:09:44.657 tsc_hz: 2200000000 (cyc) 00:09:44.657 ====================================== 00:09:44.657 poller_cost: 480 (cyc), 218 (nsec) 00:09:44.657 ************************************ 00:09:44.657 END TEST thread_poller_perf 00:09:44.657 ************************************ 00:09:44.657 00:09:44.657 real 0m1.681s 00:09:44.657 user 0m1.490s 00:09:44.657 sys 0m0.089s 00:09:44.657 12:55:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.657 12:55:03 -- common/autotest_common.sh@10 -- # set +x 00:09:44.657 12:55:03 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:44.657 12:55:03 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:44.657 12:55:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:44.657 12:55:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.657 12:55:03 -- common/autotest_common.sh@10 -- # set +x 00:09:44.657 ************************************ 00:09:44.657 START TEST thread_spdk_lock 00:09:44.657 ************************************ 00:09:44.657 12:55:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:44.657 [2024-06-11 12:55:03.484526] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:44.657 [2024-06-11 12:55:03.484863] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108257 ] 00:09:44.915 [2024-06-11 12:55:03.656432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:45.173 [2024-06-11 12:55:03.846557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.173 [2024-06-11 12:55:03.846566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.740 [2024-06-11 12:55:04.368059] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:45.740 [2024-06-11 12:55:04.369483] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:45.740 [2024-06-11 12:55:04.369556] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x55d16add7840 00:09:45.740 [2024-06-11 12:55:04.376407] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:45.740 [2024-06-11 12:55:04.376621] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:45.740 [2024-06-11 12:55:04.376762] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:45.999 Starting test contend 00:09:45.999 Worker Delay Wait us Hold us Total us 00:09:45.999 0 3 139376 190921 330297 00:09:45.999 1 5 60549 297554 358104 00:09:45.999 PASS test contend 00:09:45.999 Starting test hold_by_poller 
00:09:45.999 PASS test hold_by_poller 00:09:45.999 Starting test hold_by_message 00:09:45.999 PASS test hold_by_message 00:09:45.999 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:09:45.999 100014 assertions passed 00:09:45.999 0 assertions failed 00:09:45.999 ************************************ 00:09:45.999 END TEST thread_spdk_lock 00:09:45.999 ************************************ 00:09:45.999 00:09:45.999 real 0m1.254s 00:09:45.999 user 0m1.555s 00:09:45.999 sys 0m0.128s 00:09:45.999 12:55:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.999 12:55:04 -- common/autotest_common.sh@10 -- # set +x 00:09:45.999 ************************************ 00:09:45.999 END TEST thread 00:09:45.999 ************************************ 00:09:45.999 00:09:45.999 real 0m4.879s 00:09:45.999 user 0m4.690s 00:09:45.999 sys 0m0.398s 00:09:45.999 12:55:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.999 12:55:04 -- common/autotest_common.sh@10 -- # set +x 00:09:45.999 12:55:04 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:45.999 12:55:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:45.999 12:55:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:45.999 12:55:04 -- common/autotest_common.sh@10 -- # set +x 00:09:45.999 ************************************ 00:09:45.999 START TEST accel 00:09:45.999 ************************************ 00:09:45.999 12:55:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:45.999 * Looking for test storage... 00:09:46.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:46.258 12:55:04 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:09:46.258 12:55:04 -- accel/accel.sh@74 -- # get_expected_opcs 00:09:46.258 12:55:04 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:46.258 12:55:04 -- accel/accel.sh@59 -- # spdk_tgt_pid=108351 00:09:46.258 12:55:04 -- accel/accel.sh@60 -- # waitforlisten 108351 00:09:46.258 12:55:04 -- common/autotest_common.sh@819 -- # '[' -z 108351 ']' 00:09:46.258 12:55:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.258 12:55:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:46.258 12:55:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.258 12:55:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:46.258 12:55:04 -- common/autotest_common.sh@10 -- # set +x 00:09:46.259 12:55:04 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:46.259 12:55:04 -- accel/accel.sh@58 -- # build_accel_config 00:09:46.259 12:55:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:46.259 12:55:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:46.259 12:55:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:46.259 12:55:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:46.259 12:55:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:46.259 12:55:04 -- accel/accel.sh@41 -- # local IFS=, 00:09:46.259 12:55:04 -- accel/accel.sh@42 -- # jq -r . 00:09:46.259 [2024-06-11 12:55:04.905956] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
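accel.sh's get_expected_opcs (traced below) starts the target, queries accel_get_opc_assignments over RPC, and flattens the JSON reply into opc=module pairs with jq; that is where the long run of expected_opcs[...]=software assignments that follows comes from. The jq step on a made-up two-entry reply (the opcode names here are only illustrative; in this run every opcode maps to the software module):

  echo '{"copy": "software", "fill": "software"}' \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # copy=software
  # fill=software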
00:09:46.259 [2024-06-11 12:55:04.906335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108351 ] 00:09:46.259 [2024-06-11 12:55:05.060889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.520 [2024-06-11 12:55:05.253655] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:46.520 [2024-06-11 12:55:05.254075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.898 12:55:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:47.898 12:55:06 -- common/autotest_common.sh@852 -- # return 0 00:09:47.898 12:55:06 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:47.898 12:55:06 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:47.898 12:55:06 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:09:47.898 12:55:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.898 12:55:06 -- common/autotest_common.sh@10 -- # set +x 00:09:47.898 12:55:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # IFS== 00:09:47.898 12:55:06 -- accel/accel.sh@64 -- # read -r opc module 00:09:47.898 12:55:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:47.898 12:55:06 -- accel/accel.sh@67 -- # killprocess 108351 00:09:47.898 12:55:06 -- common/autotest_common.sh@926 -- # '[' -z 108351 ']' 00:09:47.898 12:55:06 -- common/autotest_common.sh@930 -- # kill -0 108351 00:09:47.898 12:55:06 -- common/autotest_common.sh@931 -- # uname 00:09:47.898 12:55:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:47.898 12:55:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108351 00:09:47.898 12:55:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:47.898 12:55:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:47.898 12:55:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108351' 00:09:47.898 killing process with pid 108351 00:09:47.898 12:55:06 -- common/autotest_common.sh@945 -- # kill 108351 00:09:47.898 12:55:06 -- common/autotest_common.sh@950 -- # wait 108351 00:09:49.802 12:55:08 -- accel/accel.sh@68 -- # trap - ERR 00:09:49.802 12:55:08 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:49.802 12:55:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:49.802 12:55:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.802 12:55:08 -- common/autotest_common.sh@10 -- # set +x 00:09:49.802 12:55:08 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:09:49.802 12:55:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:49.802 12:55:08 -- accel/accel.sh@12 -- # build_accel_config 00:09:49.802 12:55:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:49.802 12:55:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:49.802 12:55:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:49.802 12:55:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:49.802 12:55:08 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:09:49.802 12:55:08 -- accel/accel.sh@41 -- # local IFS=, 00:09:49.802 12:55:08 -- accel/accel.sh@42 -- # jq -r . 00:09:49.802 12:55:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.802 12:55:08 -- common/autotest_common.sh@10 -- # set +x 00:09:49.802 12:55:08 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:49.802 12:55:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:49.802 12:55:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.802 12:55:08 -- common/autotest_common.sh@10 -- # set +x 00:09:50.061 ************************************ 00:09:50.061 START TEST accel_missing_filename 00:09:50.061 ************************************ 00:09:50.061 12:55:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:09:50.061 12:55:08 -- common/autotest_common.sh@640 -- # local es=0 00:09:50.061 12:55:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:50.061 12:55:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:50.061 12:55:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:50.061 12:55:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:50.061 12:55:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:50.061 12:55:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:09:50.061 12:55:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:50.061 12:55:08 -- accel/accel.sh@12 -- # build_accel_config 00:09:50.061 12:55:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:50.061 12:55:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:50.061 12:55:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:50.061 12:55:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:50.061 12:55:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:50.061 12:55:08 -- accel/accel.sh@41 -- # local IFS=, 00:09:50.061 12:55:08 -- accel/accel.sh@42 -- # jq -r . 00:09:50.061 [2024-06-11 12:55:08.687999] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:50.061 [2024-06-11 12:55:08.688448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108431 ] 00:09:50.061 [2024-06-11 12:55:08.856866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.320 [2024-06-11 12:55:09.045027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.580 [2024-06-11 12:55:09.227110] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:50.838 [2024-06-11 12:55:09.637279] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:51.406 A filename is required. 
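accel_missing_filename drives accel_perf with a compress workload and no input file, and the "A filename is required." line above is the expected rejection; the accel_compress_verify case that follows supplies an input file but adds -y (verify), which the compress path does not support. The two probes as the traces show them (the bib file is the test input shipped in the SPDK tree):

  accel_perf -t 1 -w compress
  # -> A filename is required.
  accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
  # -> Compression does not support the verify option, aborting.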
00:09:51.406 ************************************ 00:09:51.406 END TEST accel_missing_filename 00:09:51.406 ************************************ 00:09:51.406 12:55:09 -- common/autotest_common.sh@643 -- # es=234 00:09:51.406 12:55:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:51.406 12:55:09 -- common/autotest_common.sh@652 -- # es=106 00:09:51.407 12:55:09 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:51.407 12:55:09 -- common/autotest_common.sh@660 -- # es=1 00:09:51.407 12:55:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:51.407 00:09:51.407 real 0m1.322s 00:09:51.407 user 0m1.082s 00:09:51.407 sys 0m0.188s 00:09:51.407 12:55:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.407 12:55:09 -- common/autotest_common.sh@10 -- # set +x 00:09:51.407 12:55:10 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:51.407 12:55:10 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:51.407 12:55:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:51.407 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:09:51.407 ************************************ 00:09:51.407 START TEST accel_compress_verify 00:09:51.407 ************************************ 00:09:51.407 12:55:10 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:51.407 12:55:10 -- common/autotest_common.sh@640 -- # local es=0 00:09:51.407 12:55:10 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:51.407 12:55:10 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:51.407 12:55:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:51.407 12:55:10 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:51.407 12:55:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:51.407 12:55:10 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:51.407 12:55:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:51.407 12:55:10 -- accel/accel.sh@12 -- # build_accel_config 00:09:51.407 12:55:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:51.407 12:55:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:51.407 12:55:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:51.407 12:55:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:51.407 12:55:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:51.407 12:55:10 -- accel/accel.sh@41 -- # local IFS=, 00:09:51.407 12:55:10 -- accel/accel.sh@42 -- # jq -r . 00:09:51.407 [2024-06-11 12:55:10.060998] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:51.407 [2024-06-11 12:55:10.061392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108497 ] 00:09:51.407 [2024-06-11 12:55:10.225736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.666 [2024-06-11 12:55:10.408493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.925 [2024-06-11 12:55:10.577400] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:52.184 [2024-06-11 12:55:10.982257] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:52.752 00:09:52.752 Compression does not support the verify option, aborting. 00:09:52.752 ************************************ 00:09:52.752 END TEST accel_compress_verify 00:09:52.752 ************************************ 00:09:52.752 12:55:11 -- common/autotest_common.sh@643 -- # es=161 00:09:52.752 12:55:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:52.752 12:55:11 -- common/autotest_common.sh@652 -- # es=33 00:09:52.752 12:55:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:52.752 12:55:11 -- common/autotest_common.sh@660 -- # es=1 00:09:52.752 12:55:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:52.752 00:09:52.752 real 0m1.297s 00:09:52.752 user 0m1.066s 00:09:52.752 sys 0m0.177s 00:09:52.752 12:55:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.752 12:55:11 -- common/autotest_common.sh@10 -- # set +x 00:09:52.752 12:55:11 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:52.752 12:55:11 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:52.752 12:55:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.752 12:55:11 -- common/autotest_common.sh@10 -- # set +x 00:09:52.752 ************************************ 00:09:52.752 START TEST accel_wrong_workload 00:09:52.752 ************************************ 00:09:52.752 12:55:11 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:09:52.752 12:55:11 -- common/autotest_common.sh@640 -- # local es=0 00:09:52.752 12:55:11 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:52.752 12:55:11 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:52.752 12:55:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:52.752 12:55:11 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:52.752 12:55:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:52.752 12:55:11 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:09:52.752 12:55:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:52.752 12:55:11 -- accel/accel.sh@12 -- # build_accel_config 00:09:52.752 12:55:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:52.752 12:55:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:52.752 12:55:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:52.752 12:55:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:52.752 12:55:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:52.752 12:55:11 -- accel/accel.sh@41 -- # local IFS=, 00:09:52.752 12:55:11 -- accel/accel.sh@42 -- # jq -r . 
00:09:52.752 Unsupported workload type: foobar 00:09:52.752 [2024-06-11 12:55:11.400484] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:52.752 accel_perf options: 00:09:52.752 [-h help message] 00:09:52.752 [-q queue depth per core] 00:09:52.752 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:52.752 [-T number of threads per core 00:09:52.753 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:52.753 [-t time in seconds] 00:09:52.753 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:52.753 [ dif_verify, , dif_generate, dif_generate_copy 00:09:52.753 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:52.753 [-l for compress/decompress workloads, name of uncompressed input file 00:09:52.753 [-S for crc32c workload, use this seed value (default 0) 00:09:52.753 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:52.753 [-f for fill workload, use this BYTE value (default 255) 00:09:52.753 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:52.753 [-y verify result if this switch is on] 00:09:52.753 [-a tasks to allocate per core (default: same value as -q)] 00:09:52.753 Can be used to spread operations across a wider range of memory. 00:09:52.753 12:55:11 -- common/autotest_common.sh@643 -- # es=1 00:09:52.753 12:55:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:52.753 12:55:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:52.753 12:55:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:52.753 00:09:52.753 real 0m0.067s 00:09:52.753 user 0m0.078s 00:09:52.753 sys 0m0.043s 00:09:52.753 12:55:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.753 12:55:11 -- common/autotest_common.sh@10 -- # set +x 00:09:52.753 ************************************ 00:09:52.753 END TEST accel_wrong_workload 00:09:52.753 ************************************ 00:09:52.753 12:55:11 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:52.753 12:55:11 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:52.753 12:55:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.753 12:55:11 -- common/autotest_common.sh@10 -- # set +x 00:09:52.753 ************************************ 00:09:52.753 START TEST accel_negative_buffers 00:09:52.753 ************************************ 00:09:52.753 12:55:11 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:52.753 12:55:11 -- common/autotest_common.sh@640 -- # local es=0 00:09:52.753 12:55:11 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:52.753 12:55:11 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:52.753 12:55:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:52.753 12:55:11 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:52.753 12:55:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:52.753 12:55:11 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:09:52.753 12:55:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:52.753 12:55:11 -- accel/accel.sh@12 -- # 
build_accel_config 00:09:52.753 12:55:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:52.753 12:55:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:52.753 12:55:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:52.753 12:55:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:52.753 12:55:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:52.753 12:55:11 -- accel/accel.sh@41 -- # local IFS=, 00:09:52.753 12:55:11 -- accel/accel.sh@42 -- # jq -r . 00:09:52.753 -x option must be non-negative. 00:09:52.753 [2024-06-11 12:55:11.515787] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:52.753 accel_perf options: 00:09:52.753 [-h help message] 00:09:52.753 [-q queue depth per core] 00:09:52.753 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:52.753 [-T number of threads per core 00:09:52.753 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:52.753 [-t time in seconds] 00:09:52.753 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:52.753 [ dif_verify, , dif_generate, dif_generate_copy 00:09:52.753 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:52.753 [-l for compress/decompress workloads, name of uncompressed input file 00:09:52.753 [-S for crc32c workload, use this seed value (default 0) 00:09:52.753 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:52.753 [-f for fill workload, use this BYTE value (default 255) 00:09:52.753 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:52.753 [-y verify result if this switch is on] 00:09:52.753 [-a tasks to allocate per core (default: same value as -q)] 00:09:52.753 Can be used to spread operations across a wider range of memory. 
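For reference, the usage text above (printed once for the bad -w value and once more here for the bad -x value) maps directly onto a valid invocation. A plausible well-formed command line, assembled only from the flags in that listing and the accel_perf path that appears earlier in this trace, with arbitrary example values:

# xor requires at least two source buffers, so -x 2 is the smallest accepted value;
# -q, -o, -t and -y correspond to the queue depth, transfer size, run time and
# verify options described in the listing above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -q 32 -o 4096 -t 1 -w xor -x 2 -y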
00:09:52.753 12:55:11 -- common/autotest_common.sh@643 -- # es=1 00:09:52.753 12:55:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:52.753 12:55:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:52.753 12:55:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:52.753 00:09:52.753 real 0m0.067s 00:09:52.753 user 0m0.095s 00:09:52.753 sys 0m0.026s 00:09:52.753 12:55:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.753 12:55:11 -- common/autotest_common.sh@10 -- # set +x 00:09:52.753 ************************************ 00:09:52.753 END TEST accel_negative_buffers 00:09:52.753 ************************************ 00:09:52.753 12:55:11 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:52.753 12:55:11 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:52.753 12:55:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.753 12:55:11 -- common/autotest_common.sh@10 -- # set +x 00:09:52.753 ************************************ 00:09:52.753 START TEST accel_crc32c 00:09:52.753 ************************************ 00:09:52.753 12:55:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:52.753 12:55:11 -- accel/accel.sh@16 -- # local accel_opc 00:09:53.012 12:55:11 -- accel/accel.sh@17 -- # local accel_module 00:09:53.012 12:55:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:53.012 12:55:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:53.012 12:55:11 -- accel/accel.sh@12 -- # build_accel_config 00:09:53.012 12:55:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:53.012 12:55:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:53.012 12:55:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:53.012 12:55:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:53.012 12:55:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:53.012 12:55:11 -- accel/accel.sh@41 -- # local IFS=, 00:09:53.012 12:55:11 -- accel/accel.sh@42 -- # jq -r . 00:09:53.012 [2024-06-11 12:55:11.631467] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:53.012 [2024-06-11 12:55:11.631809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108587 ] 00:09:53.012 [2024-06-11 12:55:11.797821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.271 [2024-06-11 12:55:11.991446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.173 12:55:13 -- accel/accel.sh@18 -- # out=' 00:09:55.173 SPDK Configuration: 00:09:55.173 Core mask: 0x1 00:09:55.173 00:09:55.173 Accel Perf Configuration: 00:09:55.173 Workload Type: crc32c 00:09:55.173 CRC-32C seed: 32 00:09:55.173 Transfer size: 4096 bytes 00:09:55.173 Vector count 1 00:09:55.173 Module: software 00:09:55.173 Queue depth: 32 00:09:55.173 Allocate depth: 32 00:09:55.173 # threads/core: 1 00:09:55.173 Run time: 1 seconds 00:09:55.173 Verify: Yes 00:09:55.173 00:09:55.173 Running for 1 seconds... 
00:09:55.173 00:09:55.173 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:55.173 ------------------------------------------------------------------------------------ 00:09:55.173 0,0 502976/s 1964 MiB/s 0 0 00:09:55.173 ==================================================================================== 00:09:55.173 Total 502976/s 1964 MiB/s 0 0' 00:09:55.173 12:55:13 -- accel/accel.sh@20 -- # IFS=: 00:09:55.173 12:55:13 -- accel/accel.sh@20 -- # read -r var val 00:09:55.173 12:55:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:55.173 12:55:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:55.173 12:55:13 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.173 12:55:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:55.173 12:55:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.173 12:55:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.173 12:55:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:55.173 12:55:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:55.173 12:55:13 -- accel/accel.sh@41 -- # local IFS=, 00:09:55.173 12:55:13 -- accel/accel.sh@42 -- # jq -r . 00:09:55.173 [2024-06-11 12:55:13.966794] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:55.173 [2024-06-11 12:55:13.967155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108621 ] 00:09:55.432 [2024-06-11 12:55:14.133864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.691 [2024-06-11 12:55:14.328436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val= 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val= 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val=0x1 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val= 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val= 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val=crc32c 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val=32 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val= 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val=software 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@23 -- # accel_module=software 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val=32 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val=32 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val=1 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val=Yes 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val= 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:55.691 12:55:14 -- accel/accel.sh@21 -- # val= 00:09:55.691 12:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # IFS=: 00:09:55.691 12:55:14 -- accel/accel.sh@20 -- # read -r var val 00:09:57.603 12:55:16 -- accel/accel.sh@21 -- # val= 00:09:57.603 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.603 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:09:57.603 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:09:57.603 12:55:16 -- accel/accel.sh@21 -- # val= 00:09:57.603 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.603 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:09:57.603 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:09:57.603 12:55:16 -- accel/accel.sh@21 -- # val= 00:09:57.603 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.603 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:09:57.603 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:09:57.603 12:55:16 -- accel/accel.sh@21 -- # val= 00:09:57.603 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.603 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:09:57.603 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:09:57.603 12:55:16 -- accel/accel.sh@21 -- # val= 00:09:57.603 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.603 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:09:57.603 12:55:16 
-- accel/accel.sh@20 -- # read -r var val 00:09:57.603 12:55:16 -- accel/accel.sh@21 -- # val= 00:09:57.603 12:55:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.603 12:55:16 -- accel/accel.sh@20 -- # IFS=: 00:09:57.603 12:55:16 -- accel/accel.sh@20 -- # read -r var val 00:09:57.603 ************************************ 00:09:57.603 END TEST accel_crc32c 00:09:57.603 ************************************ 00:09:57.603 12:55:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:57.603 12:55:16 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:57.603 12:55:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:57.603 00:09:57.603 real 0m4.666s 00:09:57.603 user 0m4.165s 00:09:57.603 sys 0m0.351s 00:09:57.603 12:55:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.603 12:55:16 -- common/autotest_common.sh@10 -- # set +x 00:09:57.603 12:55:16 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:57.603 12:55:16 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:57.603 12:55:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:57.603 12:55:16 -- common/autotest_common.sh@10 -- # set +x 00:09:57.603 ************************************ 00:09:57.603 START TEST accel_crc32c_C2 00:09:57.603 ************************************ 00:09:57.603 12:55:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:57.603 12:55:16 -- accel/accel.sh@16 -- # local accel_opc 00:09:57.603 12:55:16 -- accel/accel.sh@17 -- # local accel_module 00:09:57.603 12:55:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:57.603 12:55:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:57.603 12:55:16 -- accel/accel.sh@12 -- # build_accel_config 00:09:57.603 12:55:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:57.603 12:55:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:57.603 12:55:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:57.603 12:55:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:57.603 12:55:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:57.603 12:55:16 -- accel/accel.sh@41 -- # local IFS=, 00:09:57.603 12:55:16 -- accel/accel.sh@42 -- # jq -r . 00:09:57.603 [2024-06-11 12:55:16.354027] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:57.603 [2024-06-11 12:55:16.354376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108675 ] 00:09:57.886 [2024-06-11 12:55:16.521480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.145 [2024-06-11 12:55:16.730053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.049 12:55:18 -- accel/accel.sh@18 -- # out=' 00:10:00.049 SPDK Configuration: 00:10:00.049 Core mask: 0x1 00:10:00.049 00:10:00.049 Accel Perf Configuration: 00:10:00.049 Workload Type: crc32c 00:10:00.049 CRC-32C seed: 0 00:10:00.049 Transfer size: 4096 bytes 00:10:00.049 Vector count 2 00:10:00.049 Module: software 00:10:00.049 Queue depth: 32 00:10:00.049 Allocate depth: 32 00:10:00.049 # threads/core: 1 00:10:00.049 Run time: 1 seconds 00:10:00.049 Verify: Yes 00:10:00.049 00:10:00.049 Running for 1 seconds... 
00:10:00.049 00:10:00.049 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:00.049 ------------------------------------------------------------------------------------ 00:10:00.049 0,0 377984/s 2953 MiB/s 0 0 00:10:00.049 ==================================================================================== 00:10:00.049 Total 377984/s 1476 MiB/s 0 0' 00:10:00.049 12:55:18 -- accel/accel.sh@20 -- # IFS=: 00:10:00.049 12:55:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:00.049 12:55:18 -- accel/accel.sh@20 -- # read -r var val 00:10:00.049 12:55:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:00.049 12:55:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:00.049 12:55:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:00.049 12:55:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.049 12:55:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.049 12:55:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:00.049 12:55:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:00.049 12:55:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:00.049 12:55:18 -- accel/accel.sh@42 -- # jq -r . 00:10:00.049 [2024-06-11 12:55:18.721917] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:00.049 [2024-06-11 12:55:18.723047] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108709 ] 00:10:00.309 [2024-06-11 12:55:18.891667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.309 [2024-06-11 12:55:19.090093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val= 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val= 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val=0x1 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val= 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val= 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val=crc32c 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val=0 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val= 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val=software 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@23 -- # accel_module=software 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val=32 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val=32 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val=1 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val=Yes 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val= 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:00.568 12:55:19 -- accel/accel.sh@21 -- # val= 00:10:00.568 12:55:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # IFS=: 00:10:00.568 12:55:19 -- accel/accel.sh@20 -- # read -r var val 00:10:02.469 12:55:21 -- accel/accel.sh@21 -- # val= 00:10:02.469 12:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.469 12:55:21 -- accel/accel.sh@20 -- # IFS=: 00:10:02.469 12:55:21 -- accel/accel.sh@20 -- # read -r var val 00:10:02.469 12:55:21 -- accel/accel.sh@21 -- # val= 00:10:02.469 12:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.469 12:55:21 -- accel/accel.sh@20 -- # IFS=: 00:10:02.469 12:55:21 -- accel/accel.sh@20 -- # read -r var val 00:10:02.469 12:55:21 -- accel/accel.sh@21 -- # val= 00:10:02.469 12:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.469 12:55:21 -- accel/accel.sh@20 -- # IFS=: 00:10:02.469 12:55:21 -- accel/accel.sh@20 -- # read -r var val 00:10:02.469 12:55:21 -- accel/accel.sh@21 -- # val= 00:10:02.469 12:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.469 12:55:21 -- accel/accel.sh@20 -- # IFS=: 00:10:02.469 12:55:21 -- accel/accel.sh@20 -- # read -r var val 00:10:02.469 12:55:21 -- accel/accel.sh@21 -- # val= 00:10:02.469 12:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.469 12:55:21 -- accel/accel.sh@20 -- # IFS=: 00:10:02.469 12:55:21 -- 
accel/accel.sh@20 -- # read -r var val 00:10:02.469 12:55:21 -- accel/accel.sh@21 -- # val= 00:10:02.469 12:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.469 12:55:21 -- accel/accel.sh@20 -- # IFS=: 00:10:02.469 12:55:21 -- accel/accel.sh@20 -- # read -r var val 00:10:02.469 ************************************ 00:10:02.469 END TEST accel_crc32c_C2 00:10:02.469 ************************************ 00:10:02.469 12:55:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:02.469 12:55:21 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:02.469 12:55:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:02.469 00:10:02.469 real 0m4.725s 00:10:02.469 user 0m4.180s 00:10:02.469 sys 0m0.393s 00:10:02.469 12:55:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.469 12:55:21 -- common/autotest_common.sh@10 -- # set +x 00:10:02.469 12:55:21 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:02.469 12:55:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:02.469 12:55:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.469 12:55:21 -- common/autotest_common.sh@10 -- # set +x 00:10:02.469 ************************************ 00:10:02.469 START TEST accel_copy 00:10:02.469 ************************************ 00:10:02.469 12:55:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:02.469 12:55:21 -- accel/accel.sh@16 -- # local accel_opc 00:10:02.469 12:55:21 -- accel/accel.sh@17 -- # local accel_module 00:10:02.469 12:55:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:02.469 12:55:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:02.469 12:55:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:02.469 12:55:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:02.469 12:55:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.469 12:55:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.469 12:55:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:02.469 12:55:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:02.469 12:55:21 -- accel/accel.sh@41 -- # local IFS=, 00:10:02.469 12:55:21 -- accel/accel.sh@42 -- # jq -r . 00:10:02.469 [2024-06-11 12:55:21.131809] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:02.469 [2024-06-11 12:55:21.132004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108779 ] 00:10:02.469 [2024-06-11 12:55:21.297243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.728 [2024-06-11 12:55:21.480343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.629 12:55:23 -- accel/accel.sh@18 -- # out=' 00:10:04.629 SPDK Configuration: 00:10:04.629 Core mask: 0x1 00:10:04.629 00:10:04.629 Accel Perf Configuration: 00:10:04.629 Workload Type: copy 00:10:04.629 Transfer size: 4096 bytes 00:10:04.629 Vector count 1 00:10:04.629 Module: software 00:10:04.629 Queue depth: 32 00:10:04.629 Allocate depth: 32 00:10:04.629 # threads/core: 1 00:10:04.629 Run time: 1 seconds 00:10:04.629 Verify: Yes 00:10:04.629 00:10:04.629 Running for 1 seconds... 
00:10:04.629 00:10:04.629 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:04.629 ------------------------------------------------------------------------------------ 00:10:04.629 0,0 295392/s 1153 MiB/s 0 0 00:10:04.629 ==================================================================================== 00:10:04.629 Total 295392/s 1153 MiB/s 0 0' 00:10:04.629 12:55:23 -- accel/accel.sh@20 -- # IFS=: 00:10:04.629 12:55:23 -- accel/accel.sh@20 -- # read -r var val 00:10:04.629 12:55:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:04.629 12:55:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:04.629 12:55:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:04.629 12:55:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:04.629 12:55:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.629 12:55:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.629 12:55:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:04.629 12:55:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:04.629 12:55:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:04.629 12:55:23 -- accel/accel.sh@42 -- # jq -r . 00:10:04.887 [2024-06-11 12:55:23.470974] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:04.887 [2024-06-11 12:55:23.471147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108820 ] 00:10:04.887 [2024-06-11 12:55:23.640588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.146 [2024-06-11 12:55:23.845484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.405 12:55:24 -- accel/accel.sh@21 -- # val= 00:10:05.405 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.405 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val= 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val=0x1 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val= 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val= 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val=copy 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- 
accel/accel.sh@21 -- # val= 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val=software 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@23 -- # accel_module=software 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val=32 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val=32 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val=1 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val=Yes 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val= 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:05.406 12:55:24 -- accel/accel.sh@21 -- # val= 00:10:05.406 12:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # IFS=: 00:10:05.406 12:55:24 -- accel/accel.sh@20 -- # read -r var val 00:10:07.310 12:55:25 -- accel/accel.sh@21 -- # val= 00:10:07.310 12:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.310 12:55:25 -- accel/accel.sh@20 -- # IFS=: 00:10:07.310 12:55:25 -- accel/accel.sh@20 -- # read -r var val 00:10:07.310 12:55:25 -- accel/accel.sh@21 -- # val= 00:10:07.310 12:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.310 12:55:25 -- accel/accel.sh@20 -- # IFS=: 00:10:07.310 12:55:25 -- accel/accel.sh@20 -- # read -r var val 00:10:07.310 12:55:25 -- accel/accel.sh@21 -- # val= 00:10:07.310 12:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.310 12:55:25 -- accel/accel.sh@20 -- # IFS=: 00:10:07.310 12:55:25 -- accel/accel.sh@20 -- # read -r var val 00:10:07.310 12:55:25 -- accel/accel.sh@21 -- # val= 00:10:07.310 12:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.310 12:55:25 -- accel/accel.sh@20 -- # IFS=: 00:10:07.310 12:55:25 -- accel/accel.sh@20 -- # read -r var val 00:10:07.310 12:55:25 -- accel/accel.sh@21 -- # val= 00:10:07.310 12:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.310 12:55:25 -- accel/accel.sh@20 -- # IFS=: 00:10:07.310 12:55:25 -- accel/accel.sh@20 -- # read -r var val 00:10:07.310 12:55:25 -- accel/accel.sh@21 -- # val= 00:10:07.310 12:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.310 12:55:25 -- accel/accel.sh@20 -- # IFS=: 00:10:07.310 12:55:25 -- 
accel/accel.sh@20 -- # read -r var val 00:10:07.310 12:55:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:07.310 12:55:25 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:07.310 12:55:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:07.310 ************************************ 00:10:07.310 END TEST accel_copy 00:10:07.310 ************************************ 00:10:07.310 00:10:07.310 real 0m4.721s 00:10:07.310 user 0m4.238s 00:10:07.310 sys 0m0.361s 00:10:07.310 12:55:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.310 12:55:25 -- common/autotest_common.sh@10 -- # set +x 00:10:07.310 12:55:25 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:07.310 12:55:25 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:07.310 12:55:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:07.310 12:55:25 -- common/autotest_common.sh@10 -- # set +x 00:10:07.310 ************************************ 00:10:07.310 START TEST accel_fill 00:10:07.310 ************************************ 00:10:07.310 12:55:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:07.310 12:55:25 -- accel/accel.sh@16 -- # local accel_opc 00:10:07.310 12:55:25 -- accel/accel.sh@17 -- # local accel_module 00:10:07.310 12:55:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:07.310 12:55:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:07.310 12:55:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:07.310 12:55:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:07.310 12:55:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:07.310 12:55:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:07.310 12:55:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:07.310 12:55:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:07.310 12:55:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:07.310 12:55:25 -- accel/accel.sh@42 -- # jq -r . 00:10:07.310 [2024-06-11 12:55:25.900191] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:07.310 [2024-06-11 12:55:25.900336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108868 ] 00:10:07.310 [2024-06-11 12:55:26.054046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.569 [2024-06-11 12:55:26.244443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.473 12:55:28 -- accel/accel.sh@18 -- # out=' 00:10:09.473 SPDK Configuration: 00:10:09.473 Core mask: 0x1 00:10:09.473 00:10:09.473 Accel Perf Configuration: 00:10:09.473 Workload Type: fill 00:10:09.473 Fill pattern: 0x80 00:10:09.473 Transfer size: 4096 bytes 00:10:09.473 Vector count 1 00:10:09.473 Module: software 00:10:09.473 Queue depth: 64 00:10:09.473 Allocate depth: 64 00:10:09.473 # threads/core: 1 00:10:09.473 Run time: 1 seconds 00:10:09.473 Verify: Yes 00:10:09.473 00:10:09.473 Running for 1 seconds... 
00:10:09.473 00:10:09.473 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:09.473 ------------------------------------------------------------------------------------ 00:10:09.473 0,0 437248/s 1708 MiB/s 0 0 00:10:09.473 ==================================================================================== 00:10:09.473 Total 437248/s 1708 MiB/s 0 0' 00:10:09.473 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.473 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.473 12:55:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:09.473 12:55:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:09.473 12:55:28 -- accel/accel.sh@12 -- # build_accel_config 00:10:09.473 12:55:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:09.473 12:55:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.473 12:55:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.473 12:55:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:09.473 12:55:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:09.473 12:55:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:09.473 12:55:28 -- accel/accel.sh@42 -- # jq -r . 00:10:09.473 [2024-06-11 12:55:28.225778] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:09.473 [2024-06-11 12:55:28.225997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108902 ] 00:10:09.731 [2024-06-11 12:55:28.394676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.990 [2024-06-11 12:55:28.589903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val= 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val= 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val=0x1 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val= 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val= 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val=fill 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val=0x80 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 
00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val= 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val=software 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@23 -- # accel_module=software 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val=64 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val=64 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val=1 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val=Yes 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val= 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:09.990 12:55:28 -- accel/accel.sh@21 -- # val= 00:10:09.990 12:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # IFS=: 00:10:09.990 12:55:28 -- accel/accel.sh@20 -- # read -r var val 00:10:11.892 12:55:30 -- accel/accel.sh@21 -- # val= 00:10:11.892 12:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # IFS=: 00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # read -r var val 00:10:11.892 12:55:30 -- accel/accel.sh@21 -- # val= 00:10:11.892 12:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # IFS=: 00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # read -r var val 00:10:11.892 12:55:30 -- accel/accel.sh@21 -- # val= 00:10:11.892 12:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # IFS=: 00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # read -r var val 00:10:11.892 12:55:30 -- accel/accel.sh@21 -- # val= 00:10:11.892 12:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # IFS=: 00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # read -r var val 00:10:11.892 12:55:30 -- accel/accel.sh@21 -- # val= 00:10:11.892 12:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # IFS=: 
00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # read -r var val 00:10:11.892 12:55:30 -- accel/accel.sh@21 -- # val= 00:10:11.892 12:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # IFS=: 00:10:11.892 12:55:30 -- accel/accel.sh@20 -- # read -r var val 00:10:11.892 12:55:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:11.892 12:55:30 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:11.892 12:55:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:11.892 00:10:11.892 real 0m4.673s 00:10:11.892 user 0m4.184s 00:10:11.893 sys 0m0.355s 00:10:11.893 12:55:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.893 12:55:30 -- common/autotest_common.sh@10 -- # set +x 00:10:11.893 ************************************ 00:10:11.893 END TEST accel_fill 00:10:11.893 ************************************ 00:10:11.893 12:55:30 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:11.893 12:55:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:11.893 12:55:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.893 12:55:30 -- common/autotest_common.sh@10 -- # set +x 00:10:11.893 ************************************ 00:10:11.893 START TEST accel_copy_crc32c 00:10:11.893 ************************************ 00:10:11.893 12:55:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:11.893 12:55:30 -- accel/accel.sh@16 -- # local accel_opc 00:10:11.893 12:55:30 -- accel/accel.sh@17 -- # local accel_module 00:10:11.893 12:55:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:11.893 12:55:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:11.893 12:55:30 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.893 12:55:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.893 12:55:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.893 12:55:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.893 12:55:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.893 12:55:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.893 12:55:30 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.893 12:55:30 -- accel/accel.sh@42 -- # jq -r . 00:10:11.893 [2024-06-11 12:55:30.624516] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:11.893 [2024-06-11 12:55:30.624680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108972 ] 00:10:12.152 [2024-06-11 12:55:30.775273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.152 [2024-06-11 12:55:30.973514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.683 12:55:32 -- accel/accel.sh@18 -- # out=' 00:10:14.683 SPDK Configuration: 00:10:14.683 Core mask: 0x1 00:10:14.683 00:10:14.683 Accel Perf Configuration: 00:10:14.683 Workload Type: copy_crc32c 00:10:14.683 CRC-32C seed: 0 00:10:14.683 Vector size: 4096 bytes 00:10:14.683 Transfer size: 4096 bytes 00:10:14.683 Vector count 1 00:10:14.683 Module: software 00:10:14.683 Queue depth: 32 00:10:14.683 Allocate depth: 32 00:10:14.683 # threads/core: 1 00:10:14.683 Run time: 1 seconds 00:10:14.683 Verify: Yes 00:10:14.683 00:10:14.683 Running for 1 seconds... 
00:10:14.683 00:10:14.683 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:14.683 ------------------------------------------------------------------------------------ 00:10:14.683 0,0 242208/s 946 MiB/s 0 0 00:10:14.683 ==================================================================================== 00:10:14.683 Total 242208/s 946 MiB/s 0 0' 00:10:14.683 12:55:32 -- accel/accel.sh@20 -- # IFS=: 00:10:14.683 12:55:32 -- accel/accel.sh@20 -- # read -r var val 00:10:14.683 12:55:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:14.683 12:55:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:14.683 12:55:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:14.683 12:55:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:14.683 12:55:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.683 12:55:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.683 12:55:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:14.683 12:55:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:14.683 12:55:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:14.683 12:55:32 -- accel/accel.sh@42 -- # jq -r . 00:10:14.683 [2024-06-11 12:55:32.992797] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:14.683 [2024-06-11 12:55:32.992956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109006 ] 00:10:14.683 [2024-06-11 12:55:33.149752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.683 [2024-06-11 12:55:33.350647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val= 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val= 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val=0x1 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val= 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val= 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val=0 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 
12:55:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val= 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val=software 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@23 -- # accel_module=software 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val=32 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val=32 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val=1 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val=Yes 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val= 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:14.942 12:55:33 -- accel/accel.sh@21 -- # val= 00:10:14.942 12:55:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # IFS=: 00:10:14.942 12:55:33 -- accel/accel.sh@20 -- # read -r var val 00:10:16.846 12:55:35 -- accel/accel.sh@21 -- # val= 00:10:16.846 12:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # IFS=: 00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # read -r var val 00:10:16.846 12:55:35 -- accel/accel.sh@21 -- # val= 00:10:16.846 12:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # IFS=: 00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # read -r var val 00:10:16.846 12:55:35 -- accel/accel.sh@21 -- # val= 00:10:16.846 12:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # IFS=: 00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # read -r var val 00:10:16.846 12:55:35 -- accel/accel.sh@21 -- # val= 00:10:16.846 12:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # IFS=: 
00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # read -r var val 00:10:16.846 12:55:35 -- accel/accel.sh@21 -- # val= 00:10:16.846 12:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # IFS=: 00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # read -r var val 00:10:16.846 12:55:35 -- accel/accel.sh@21 -- # val= 00:10:16.846 12:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # IFS=: 00:10:16.846 12:55:35 -- accel/accel.sh@20 -- # read -r var val 00:10:16.846 12:55:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:16.846 12:55:35 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:16.846 ************************************ 00:10:16.846 END TEST accel_copy_crc32c 00:10:16.846 12:55:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:16.847 00:10:16.847 real 0m4.722s 00:10:16.847 user 0m4.253s 00:10:16.847 sys 0m0.340s 00:10:16.847 12:55:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.847 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:10:16.847 ************************************ 00:10:16.847 12:55:35 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:16.847 12:55:35 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:16.847 12:55:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.847 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:10:16.847 ************************************ 00:10:16.847 START TEST accel_copy_crc32c_C2 00:10:16.847 ************************************ 00:10:16.847 12:55:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:16.847 12:55:35 -- accel/accel.sh@16 -- # local accel_opc 00:10:16.847 12:55:35 -- accel/accel.sh@17 -- # local accel_module 00:10:16.847 12:55:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:16.847 12:55:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:16.847 12:55:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.847 12:55:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:16.847 12:55:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.847 12:55:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.847 12:55:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:16.847 12:55:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:16.847 12:55:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:16.847 12:55:35 -- accel/accel.sh@42 -- # jq -r . 00:10:16.847 [2024-06-11 12:55:35.393945] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
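The copy_crc32c -C 2 case above is driven by the accel_perf binary whose full command line appears in the trace; the harness pipes a generated JSON accel config in on /dev/fd/62 via build_accel_config. A minimal manual rerun of the same workload could look like the sketch below, assuming the config argument may be omitted outside the harness (the flags themselves are taken verbatim from the trace):
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2   # 1 s run, copy+crc32c, verify on, vector count 2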
00:10:16.847 [2024-06-11 12:55:35.394269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109060 ] 00:10:16.847 [2024-06-11 12:55:35.550552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.105 [2024-06-11 12:55:35.744101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.008 12:55:37 -- accel/accel.sh@18 -- # out=' 00:10:19.009 SPDK Configuration: 00:10:19.009 Core mask: 0x1 00:10:19.009 00:10:19.009 Accel Perf Configuration: 00:10:19.009 Workload Type: copy_crc32c 00:10:19.009 CRC-32C seed: 0 00:10:19.009 Vector size: 4096 bytes 00:10:19.009 Transfer size: 8192 bytes 00:10:19.009 Vector count 2 00:10:19.009 Module: software 00:10:19.009 Queue depth: 32 00:10:19.009 Allocate depth: 32 00:10:19.009 # threads/core: 1 00:10:19.009 Run time: 1 seconds 00:10:19.009 Verify: Yes 00:10:19.009 00:10:19.009 Running for 1 seconds... 00:10:19.009 00:10:19.009 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:19.009 ------------------------------------------------------------------------------------ 00:10:19.009 0,0 172448/s 1347 MiB/s 0 0 00:10:19.009 ==================================================================================== 00:10:19.009 Total 172448/s 673 MiB/s 0 0' 00:10:19.009 12:55:37 -- accel/accel.sh@20 -- # IFS=: 00:10:19.009 12:55:37 -- accel/accel.sh@20 -- # read -r var val 00:10:19.009 12:55:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:19.009 12:55:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:19.009 12:55:37 -- accel/accel.sh@12 -- # build_accel_config 00:10:19.009 12:55:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:19.009 12:55:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.009 12:55:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.009 12:55:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:19.009 12:55:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:19.009 12:55:37 -- accel/accel.sh@41 -- # local IFS=, 00:10:19.009 12:55:37 -- accel/accel.sh@42 -- # jq -r . 00:10:19.009 [2024-06-11 12:55:37.755746] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
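A quick sanity check on the copy_crc32c -C 2 table above (a sketch; the reading of the two rows is an inference from the arithmetic, not something the log states): the per-core figure lines up with the 8192-byte transfer size, the Total figure with the 4096-byte vector size.
awk 'BEGIN { printf "%d MiB/s\n", 172448 * 8192 / (1024 * 1024) }'   # 1347 MiB/s, matches the 0,0 row
awk 'BEGIN { printf "%d MiB/s\n", 172448 * 4096 / (1024 * 1024) }'   # 673 MiB/s, matches the Total row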
00:10:19.009 [2024-06-11 12:55:37.756101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109095 ] 00:10:19.268 [2024-06-11 12:55:37.912408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.268 [2024-06-11 12:55:38.100460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.526 12:55:38 -- accel/accel.sh@21 -- # val= 00:10:19.526 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.526 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.526 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.526 12:55:38 -- accel/accel.sh@21 -- # val= 00:10:19.526 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.526 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.526 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.526 12:55:38 -- accel/accel.sh@21 -- # val=0x1 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val= 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val= 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val=0 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val= 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val=software 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@23 -- # accel_module=software 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val=32 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val=32 
00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val=1 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val=Yes 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val= 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:19.527 12:55:38 -- accel/accel.sh@21 -- # val= 00:10:19.527 12:55:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # IFS=: 00:10:19.527 12:55:38 -- accel/accel.sh@20 -- # read -r var val 00:10:21.441 12:55:40 -- accel/accel.sh@21 -- # val= 00:10:21.441 12:55:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # IFS=: 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # read -r var val 00:10:21.441 12:55:40 -- accel/accel.sh@21 -- # val= 00:10:21.441 12:55:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # IFS=: 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # read -r var val 00:10:21.441 12:55:40 -- accel/accel.sh@21 -- # val= 00:10:21.441 12:55:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # IFS=: 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # read -r var val 00:10:21.441 12:55:40 -- accel/accel.sh@21 -- # val= 00:10:21.441 12:55:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # IFS=: 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # read -r var val 00:10:21.441 12:55:40 -- accel/accel.sh@21 -- # val= 00:10:21.441 12:55:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # IFS=: 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # read -r var val 00:10:21.441 12:55:40 -- accel/accel.sh@21 -- # val= 00:10:21.441 12:55:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # IFS=: 00:10:21.441 12:55:40 -- accel/accel.sh@20 -- # read -r var val 00:10:21.441 ************************************ 00:10:21.441 END TEST accel_copy_crc32c_C2 00:10:21.441 ************************************ 00:10:21.441 12:55:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:21.441 12:55:40 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:21.441 12:55:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:21.441 00:10:21.441 real 0m4.695s 00:10:21.441 user 0m4.251s 00:10:21.441 sys 0m0.309s 00:10:21.441 12:55:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.441 12:55:40 -- common/autotest_common.sh@10 -- # set +x 00:10:21.441 12:55:40 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:21.441 12:55:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:10:21.441 12:55:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:21.441 12:55:40 -- common/autotest_common.sh@10 -- # set +x 00:10:21.441 ************************************ 00:10:21.441 START TEST accel_dualcast 00:10:21.441 ************************************ 00:10:21.441 12:55:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:10:21.442 12:55:40 -- accel/accel.sh@16 -- # local accel_opc 00:10:21.442 12:55:40 -- accel/accel.sh@17 -- # local accel_module 00:10:21.442 12:55:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:21.442 12:55:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:21.442 12:55:40 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.442 12:55:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:21.442 12:55:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.442 12:55:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.442 12:55:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:21.442 12:55:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:21.442 12:55:40 -- accel/accel.sh@41 -- # local IFS=, 00:10:21.442 12:55:40 -- accel/accel.sh@42 -- # jq -r . 00:10:21.442 [2024-06-11 12:55:40.139469] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:21.442 [2024-06-11 12:55:40.140224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109146 ] 00:10:21.700 [2024-06-11 12:55:40.293292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.700 [2024-06-11 12:55:40.508371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.230 12:55:42 -- accel/accel.sh@18 -- # out=' 00:10:24.230 SPDK Configuration: 00:10:24.230 Core mask: 0x1 00:10:24.230 00:10:24.230 Accel Perf Configuration: 00:10:24.230 Workload Type: dualcast 00:10:24.230 Transfer size: 4096 bytes 00:10:24.230 Vector count 1 00:10:24.230 Module: software 00:10:24.230 Queue depth: 32 00:10:24.230 Allocate depth: 32 00:10:24.230 # threads/core: 1 00:10:24.230 Run time: 1 seconds 00:10:24.230 Verify: Yes 00:10:24.230 00:10:24.230 Running for 1 seconds... 00:10:24.230 00:10:24.230 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:24.230 ------------------------------------------------------------------------------------ 00:10:24.230 0,0 311360/s 1216 MiB/s 0 0 00:10:24.230 ==================================================================================== 00:10:24.230 Total 311360/s 1216 MiB/s 0 0' 00:10:24.230 12:55:42 -- accel/accel.sh@20 -- # IFS=: 00:10:24.230 12:55:42 -- accel/accel.sh@20 -- # read -r var val 00:10:24.230 12:55:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:24.230 12:55:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:24.230 12:55:42 -- accel/accel.sh@12 -- # build_accel_config 00:10:24.230 12:55:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:24.230 12:55:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.230 12:55:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.230 12:55:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:24.230 12:55:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:24.230 12:55:42 -- accel/accel.sh@41 -- # local IFS=, 00:10:24.230 12:55:42 -- accel/accel.sh@42 -- # jq -r . 
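The dualcast row above can be cross-checked the same way; the helper below is hypothetical (not part of the SPDK tree), it simply redoes the transfers/s times transfer-size arithmetic behind the MiB/s column.
to_mib() { awk -v tps="$1" -v sz="$2" 'BEGIN { printf "%d MiB/s\n", tps * sz / (1024 * 1024) }'; }
to_mib 311360 4096   # 1216 MiB/s, matches the dualcast 0,0 and Total rows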
00:10:24.230 [2024-06-11 12:55:42.521232] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:24.230 [2024-06-11 12:55:42.521616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109208 ] 00:10:24.230 [2024-06-11 12:55:42.689159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.230 [2024-06-11 12:55:42.877192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val= 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val= 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val=0x1 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val= 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val= 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val=dualcast 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val= 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val=software 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@23 -- # accel_module=software 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val=32 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val=32 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val=1 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 
12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val=Yes 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val= 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:24.489 12:55:43 -- accel/accel.sh@21 -- # val= 00:10:24.489 12:55:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # IFS=: 00:10:24.489 12:55:43 -- accel/accel.sh@20 -- # read -r var val 00:10:26.392 12:55:44 -- accel/accel.sh@21 -- # val= 00:10:26.392 12:55:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.392 12:55:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.392 12:55:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.392 12:55:44 -- accel/accel.sh@21 -- # val= 00:10:26.392 12:55:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.392 12:55:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.392 12:55:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.392 12:55:44 -- accel/accel.sh@21 -- # val= 00:10:26.392 12:55:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.392 12:55:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.392 12:55:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.392 12:55:44 -- accel/accel.sh@21 -- # val= 00:10:26.392 12:55:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.392 12:55:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.392 12:55:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.392 12:55:44 -- accel/accel.sh@21 -- # val= 00:10:26.392 12:55:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.392 12:55:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.392 12:55:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.392 12:55:44 -- accel/accel.sh@21 -- # val= 00:10:26.392 12:55:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.392 12:55:44 -- accel/accel.sh@20 -- # IFS=: 00:10:26.393 12:55:44 -- accel/accel.sh@20 -- # read -r var val 00:10:26.393 ************************************ 00:10:26.393 END TEST accel_dualcast 00:10:26.393 12:55:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:26.393 12:55:44 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:26.393 12:55:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:26.393 00:10:26.393 real 0m4.722s 00:10:26.393 user 0m4.229s 00:10:26.393 sys 0m0.338s 00:10:26.393 12:55:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.393 12:55:44 -- common/autotest_common.sh@10 -- # set +x 00:10:26.393 ************************************ 00:10:26.393 12:55:44 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:26.393 12:55:44 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:26.393 12:55:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:26.393 12:55:44 -- common/autotest_common.sh@10 -- # set +x 00:10:26.393 ************************************ 00:10:26.393 START TEST accel_compare 00:10:26.393 ************************************ 00:10:26.393 12:55:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:10:26.393 
12:55:44 -- accel/accel.sh@16 -- # local accel_opc 00:10:26.393 12:55:44 -- accel/accel.sh@17 -- # local accel_module 00:10:26.393 12:55:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:26.393 12:55:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:26.393 12:55:44 -- accel/accel.sh@12 -- # build_accel_config 00:10:26.393 12:55:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:26.393 12:55:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.393 12:55:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.393 12:55:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:26.393 12:55:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:26.393 12:55:44 -- accel/accel.sh@41 -- # local IFS=, 00:10:26.393 12:55:44 -- accel/accel.sh@42 -- # jq -r . 00:10:26.393 [2024-06-11 12:55:44.917933] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:26.393 [2024-06-11 12:55:44.918281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109255 ] 00:10:26.393 [2024-06-11 12:55:45.085321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.651 [2024-06-11 12:55:45.285797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.555 12:55:47 -- accel/accel.sh@18 -- # out=' 00:10:28.555 SPDK Configuration: 00:10:28.555 Core mask: 0x1 00:10:28.555 00:10:28.555 Accel Perf Configuration: 00:10:28.555 Workload Type: compare 00:10:28.555 Transfer size: 4096 bytes 00:10:28.555 Vector count 1 00:10:28.555 Module: software 00:10:28.555 Queue depth: 32 00:10:28.555 Allocate depth: 32 00:10:28.555 # threads/core: 1 00:10:28.555 Run time: 1 seconds 00:10:28.555 Verify: Yes 00:10:28.555 00:10:28.555 Running for 1 seconds... 00:10:28.555 00:10:28.555 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:28.555 ------------------------------------------------------------------------------------ 00:10:28.555 0,0 447616/s 1748 MiB/s 0 0 00:10:28.555 ==================================================================================== 00:10:28.555 Total 447616/s 1748 MiB/s 0 0' 00:10:28.555 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:28.555 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:28.555 12:55:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:28.555 12:55:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:28.555 12:55:47 -- accel/accel.sh@12 -- # build_accel_config 00:10:28.555 12:55:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:28.555 12:55:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.555 12:55:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.555 12:55:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:28.555 12:55:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:28.555 12:55:47 -- accel/accel.sh@41 -- # local IFS=, 00:10:28.555 12:55:47 -- accel/accel.sh@42 -- # jq -r . 00:10:28.555 [2024-06-11 12:55:47.290068] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
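Reusing the hypothetical to_mib helper sketched after the dualcast run, the compare result above checks out as well:
to_mib 447616 4096   # 1748 MiB/s, matches the compare rows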
00:10:28.555 [2024-06-11 12:55:47.290601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109289 ] 00:10:28.814 [2024-06-11 12:55:47.461540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.073 [2024-06-11 12:55:47.682224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.073 12:55:47 -- accel/accel.sh@21 -- # val= 00:10:29.073 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.073 12:55:47 -- accel/accel.sh@21 -- # val= 00:10:29.073 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.073 12:55:47 -- accel/accel.sh@21 -- # val=0x1 00:10:29.073 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.073 12:55:47 -- accel/accel.sh@21 -- # val= 00:10:29.073 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.073 12:55:47 -- accel/accel.sh@21 -- # val= 00:10:29.073 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.073 12:55:47 -- accel/accel.sh@21 -- # val=compare 00:10:29.073 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.073 12:55:47 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.073 12:55:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:29.073 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.073 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.074 12:55:47 -- accel/accel.sh@21 -- # val= 00:10:29.074 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.074 12:55:47 -- accel/accel.sh@21 -- # val=software 00:10:29.074 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.074 12:55:47 -- accel/accel.sh@23 -- # accel_module=software 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.074 12:55:47 -- accel/accel.sh@21 -- # val=32 00:10:29.074 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.074 12:55:47 -- accel/accel.sh@21 -- # val=32 00:10:29.074 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.074 12:55:47 -- accel/accel.sh@21 -- # val=1 00:10:29.074 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.074 12:55:47 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:29.074 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.074 12:55:47 -- accel/accel.sh@21 -- # val=Yes 00:10:29.074 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.074 12:55:47 -- accel/accel.sh@21 -- # val= 00:10:29.074 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:29.074 12:55:47 -- accel/accel.sh@21 -- # val= 00:10:29.074 12:55:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # IFS=: 00:10:29.074 12:55:47 -- accel/accel.sh@20 -- # read -r var val 00:10:30.978 12:55:49 -- accel/accel.sh@21 -- # val= 00:10:30.978 12:55:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # IFS=: 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # read -r var val 00:10:30.978 12:55:49 -- accel/accel.sh@21 -- # val= 00:10:30.978 12:55:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # IFS=: 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # read -r var val 00:10:30.978 12:55:49 -- accel/accel.sh@21 -- # val= 00:10:30.978 12:55:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # IFS=: 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # read -r var val 00:10:30.978 12:55:49 -- accel/accel.sh@21 -- # val= 00:10:30.978 12:55:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # IFS=: 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # read -r var val 00:10:30.978 12:55:49 -- accel/accel.sh@21 -- # val= 00:10:30.978 12:55:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # IFS=: 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # read -r var val 00:10:30.978 12:55:49 -- accel/accel.sh@21 -- # val= 00:10:30.978 12:55:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # IFS=: 00:10:30.978 12:55:49 -- accel/accel.sh@20 -- # read -r var val 00:10:30.978 12:55:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:30.978 12:55:49 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:30.978 12:55:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:30.978 00:10:30.978 real 0m4.776s 00:10:30.978 user 0m4.268s 00:10:30.978 sys 0m0.337s 00:10:30.978 12:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.978 ************************************ 00:10:30.978 END TEST accel_compare 00:10:30.978 ************************************ 00:10:30.978 12:55:49 -- common/autotest_common.sh@10 -- # set +x 00:10:30.978 12:55:49 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:30.978 12:55:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:30.978 12:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.978 12:55:49 -- common/autotest_common.sh@10 -- # set +x 00:10:30.978 ************************************ 00:10:30.978 START TEST accel_xor 00:10:30.978 ************************************ 00:10:30.978 12:55:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:10:30.978 12:55:49 -- accel/accel.sh@16 -- # local accel_opc 00:10:30.978 12:55:49 -- accel/accel.sh@17 -- # local accel_module 00:10:30.978 
12:55:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:30.978 12:55:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:30.978 12:55:49 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.978 12:55:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.978 12:55:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.978 12:55:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.978 12:55:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.978 12:55:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.978 12:55:49 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.978 12:55:49 -- accel/accel.sh@42 -- # jq -r . 00:10:30.978 [2024-06-11 12:55:49.738268] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:30.978 [2024-06-11 12:55:49.738584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109343 ] 00:10:31.248 [2024-06-11 12:55:49.897376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.534 [2024-06-11 12:55:50.099946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.439 12:55:52 -- accel/accel.sh@18 -- # out=' 00:10:33.439 SPDK Configuration: 00:10:33.439 Core mask: 0x1 00:10:33.439 00:10:33.439 Accel Perf Configuration: 00:10:33.439 Workload Type: xor 00:10:33.439 Source buffers: 2 00:10:33.439 Transfer size: 4096 bytes 00:10:33.439 Vector count 1 00:10:33.439 Module: software 00:10:33.439 Queue depth: 32 00:10:33.439 Allocate depth: 32 00:10:33.439 # threads/core: 1 00:10:33.439 Run time: 1 seconds 00:10:33.439 Verify: Yes 00:10:33.439 00:10:33.439 Running for 1 seconds... 00:10:33.439 00:10:33.439 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:33.439 ------------------------------------------------------------------------------------ 00:10:33.439 0,0 229824/s 897 MiB/s 0 0 00:10:33.439 ==================================================================================== 00:10:33.439 Total 229824/s 897 MiB/s 0 0' 00:10:33.439 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.439 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.439 12:55:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:33.439 12:55:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:33.439 12:55:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:33.439 12:55:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:33.439 12:55:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.439 12:55:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.439 12:55:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:33.439 12:55:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:33.439 12:55:52 -- accel/accel.sh@41 -- # local IFS=, 00:10:33.439 12:55:52 -- accel/accel.sh@42 -- # jq -r . 00:10:33.439 [2024-06-11 12:55:52.109815] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
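The plain xor case runs with the default of 2 source buffers, as the configuration block above shows; its throughput row is again consistent with the printed transfer count:
to_mib 229824 4096   # 897 MiB/s, matches the xor (2 source buffers) rows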
00:10:33.439 [2024-06-11 12:55:52.110188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109400 ] 00:10:33.696 [2024-06-11 12:55:52.278140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.696 [2024-06-11 12:55:52.468653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val= 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val= 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val=0x1 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val= 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val= 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val=xor 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val=2 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val= 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val=software 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@23 -- # accel_module=software 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val=32 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val=32 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val=1 00:10:33.955 12:55:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val=Yes 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val= 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:33.955 12:55:52 -- accel/accel.sh@21 -- # val= 00:10:33.955 12:55:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # IFS=: 00:10:33.955 12:55:52 -- accel/accel.sh@20 -- # read -r var val 00:10:35.858 12:55:54 -- accel/accel.sh@21 -- # val= 00:10:35.858 12:55:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # IFS=: 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # read -r var val 00:10:35.858 12:55:54 -- accel/accel.sh@21 -- # val= 00:10:35.858 12:55:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # IFS=: 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # read -r var val 00:10:35.858 12:55:54 -- accel/accel.sh@21 -- # val= 00:10:35.858 12:55:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # IFS=: 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # read -r var val 00:10:35.858 12:55:54 -- accel/accel.sh@21 -- # val= 00:10:35.858 12:55:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # IFS=: 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # read -r var val 00:10:35.858 12:55:54 -- accel/accel.sh@21 -- # val= 00:10:35.858 12:55:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # IFS=: 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # read -r var val 00:10:35.858 12:55:54 -- accel/accel.sh@21 -- # val= 00:10:35.858 12:55:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # IFS=: 00:10:35.858 12:55:54 -- accel/accel.sh@20 -- # read -r var val 00:10:35.858 ************************************ 00:10:35.858 END TEST accel_xor 00:10:35.858 ************************************ 00:10:35.858 12:55:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:35.858 12:55:54 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:35.858 12:55:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:35.858 00:10:35.858 real 0m4.734s 00:10:35.858 user 0m4.273s 00:10:35.858 sys 0m0.317s 00:10:35.858 12:55:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.858 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:10:35.858 12:55:54 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:35.858 12:55:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:35.858 12:55:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.858 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:10:35.858 ************************************ 00:10:35.858 START TEST accel_xor 00:10:35.858 ************************************ 00:10:35.858 
12:55:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:10:35.858 12:55:54 -- accel/accel.sh@16 -- # local accel_opc 00:10:35.858 12:55:54 -- accel/accel.sh@17 -- # local accel_module 00:10:35.858 12:55:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:35.858 12:55:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:35.858 12:55:54 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.858 12:55:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.858 12:55:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.858 12:55:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.858 12:55:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.858 12:55:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.858 12:55:54 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.858 12:55:54 -- accel/accel.sh@42 -- # jq -r . 00:10:35.858 [2024-06-11 12:55:54.535605] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:35.858 [2024-06-11 12:55:54.536018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109448 ] 00:10:36.117 [2024-06-11 12:55:54.703636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.117 [2024-06-11 12:55:54.909331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.649 12:55:56 -- accel/accel.sh@18 -- # out=' 00:10:38.649 SPDK Configuration: 00:10:38.649 Core mask: 0x1 00:10:38.649 00:10:38.649 Accel Perf Configuration: 00:10:38.649 Workload Type: xor 00:10:38.649 Source buffers: 3 00:10:38.649 Transfer size: 4096 bytes 00:10:38.649 Vector count 1 00:10:38.649 Module: software 00:10:38.649 Queue depth: 32 00:10:38.649 Allocate depth: 32 00:10:38.649 # threads/core: 1 00:10:38.649 Run time: 1 seconds 00:10:38.649 Verify: Yes 00:10:38.649 00:10:38.649 Running for 1 seconds... 00:10:38.649 00:10:38.649 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:38.649 ------------------------------------------------------------------------------------ 00:10:38.649 0,0 227904/s 890 MiB/s 0 0 00:10:38.649 ==================================================================================== 00:10:38.649 Total 227904/s 890 MiB/s 0 0' 00:10:38.649 12:55:56 -- accel/accel.sh@20 -- # IFS=: 00:10:38.649 12:55:56 -- accel/accel.sh@20 -- # read -r var val 00:10:38.649 12:55:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:38.649 12:55:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:38.649 12:55:56 -- accel/accel.sh@12 -- # build_accel_config 00:10:38.649 12:55:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:38.649 12:55:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.649 12:55:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.649 12:55:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:38.649 12:55:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:38.649 12:55:56 -- accel/accel.sh@41 -- # local IFS=, 00:10:38.649 12:55:56 -- accel/accel.sh@42 -- # jq -r . 00:10:38.649 [2024-06-11 12:55:56.926919] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
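With a third source buffer (-x 3) the transfer rate barely moves relative to the 2-buffer run; a rough comparison of the two transfers/s values printed above (a sketch using only numbers from this log):
awk 'BEGIN { printf "%.1f%% fewer transfers/s with -x 3\n", (229824 - 227904) * 100 / 229824 }'   # ~0.8%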
00:10:38.650 [2024-06-11 12:55:56.927288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109487 ] 00:10:38.650 [2024-06-11 12:55:57.093279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.650 [2024-06-11 12:55:57.292460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.908 12:55:57 -- accel/accel.sh@21 -- # val= 00:10:38.908 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.908 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.908 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.908 12:55:57 -- accel/accel.sh@21 -- # val= 00:10:38.908 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.908 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.908 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.908 12:55:57 -- accel/accel.sh@21 -- # val=0x1 00:10:38.908 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.908 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.908 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.908 12:55:57 -- accel/accel.sh@21 -- # val= 00:10:38.908 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.908 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.908 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.908 12:55:57 -- accel/accel.sh@21 -- # val= 00:10:38.908 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.908 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.908 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.908 12:55:57 -- accel/accel.sh@21 -- # val=xor 00:10:38.908 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.908 12:55:57 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.909 12:55:57 -- accel/accel.sh@21 -- # val=3 00:10:38.909 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.909 12:55:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:38.909 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.909 12:55:57 -- accel/accel.sh@21 -- # val= 00:10:38.909 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.909 12:55:57 -- accel/accel.sh@21 -- # val=software 00:10:38.909 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.909 12:55:57 -- accel/accel.sh@23 -- # accel_module=software 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.909 12:55:57 -- accel/accel.sh@21 -- # val=32 00:10:38.909 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.909 12:55:57 -- accel/accel.sh@21 -- # val=32 00:10:38.909 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.909 12:55:57 -- accel/accel.sh@21 -- # val=1 00:10:38.909 12:55:57 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.909 12:55:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:38.909 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.909 12:55:57 -- accel/accel.sh@21 -- # val=Yes 00:10:38.909 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.909 12:55:57 -- accel/accel.sh@21 -- # val= 00:10:38.909 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:38.909 12:55:57 -- accel/accel.sh@21 -- # val= 00:10:38.909 12:55:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # IFS=: 00:10:38.909 12:55:57 -- accel/accel.sh@20 -- # read -r var val 00:10:40.811 12:55:59 -- accel/accel.sh@21 -- # val= 00:10:40.811 12:55:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # read -r var val 00:10:40.811 12:55:59 -- accel/accel.sh@21 -- # val= 00:10:40.811 12:55:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # read -r var val 00:10:40.811 12:55:59 -- accel/accel.sh@21 -- # val= 00:10:40.811 12:55:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # read -r var val 00:10:40.811 12:55:59 -- accel/accel.sh@21 -- # val= 00:10:40.811 12:55:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # read -r var val 00:10:40.811 12:55:59 -- accel/accel.sh@21 -- # val= 00:10:40.811 12:55:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # read -r var val 00:10:40.811 12:55:59 -- accel/accel.sh@21 -- # val= 00:10:40.811 12:55:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # IFS=: 00:10:40.811 12:55:59 -- accel/accel.sh@20 -- # read -r var val 00:10:40.811 ************************************ 00:10:40.811 END TEST accel_xor 00:10:40.811 ************************************ 00:10:40.811 12:55:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:40.811 12:55:59 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:40.811 12:55:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:40.811 00:10:40.811 real 0m4.762s 00:10:40.811 user 0m4.292s 00:10:40.811 sys 0m0.317s 00:10:40.811 12:55:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.811 12:55:59 -- common/autotest_common.sh@10 -- # set +x 00:10:40.811 12:55:59 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:40.811 12:55:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:40.811 12:55:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:40.811 12:55:59 -- common/autotest_common.sh@10 -- # set +x 00:10:40.811 ************************************ 00:10:40.811 START TEST accel_dif_verify 00:10:40.811 ************************************ 
00:10:40.811 12:55:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:10:40.811 12:55:59 -- accel/accel.sh@16 -- # local accel_opc 00:10:40.811 12:55:59 -- accel/accel.sh@17 -- # local accel_module 00:10:40.811 12:55:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:40.811 12:55:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.811 12:55:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:40.811 12:55:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.811 12:55:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.811 12:55:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.811 12:55:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.811 12:55:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.811 12:55:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.811 12:55:59 -- accel/accel.sh@42 -- # jq -r . 00:10:40.811 [2024-06-11 12:55:59.345028] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:40.811 [2024-06-11 12:55:59.345391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109534 ] 00:10:40.811 [2024-06-11 12:55:59.512690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.069 [2024-06-11 12:55:59.708749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.972 12:56:01 -- accel/accel.sh@18 -- # out=' 00:10:42.972 SPDK Configuration: 00:10:42.972 Core mask: 0x1 00:10:42.972 00:10:42.972 Accel Perf Configuration: 00:10:42.972 Workload Type: dif_verify 00:10:42.972 Vector size: 4096 bytes 00:10:42.972 Transfer size: 4096 bytes 00:10:42.972 Block size: 512 bytes 00:10:42.972 Metadata size: 8 bytes 00:10:42.972 Vector count 1 00:10:42.972 Module: software 00:10:42.972 Queue depth: 32 00:10:42.972 Allocate depth: 32 00:10:42.972 # threads/core: 1 00:10:42.972 Run time: 1 seconds 00:10:42.972 Verify: No 00:10:42.972 00:10:42.972 Running for 1 seconds... 00:10:42.972 00:10:42.972 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:42.972 ------------------------------------------------------------------------------------ 00:10:42.972 0,0 107200/s 425 MiB/s 0 0 00:10:42.972 ==================================================================================== 00:10:42.972 Total 107200/s 418 MiB/s 0 0' 00:10:42.972 12:56:01 -- accel/accel.sh@20 -- # IFS=: 00:10:42.972 12:56:01 -- accel/accel.sh@20 -- # read -r var val 00:10:42.972 12:56:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:42.972 12:56:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:42.972 12:56:01 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.972 12:56:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.972 12:56:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.972 12:56:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.972 12:56:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.972 12:56:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.972 12:56:01 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.972 12:56:01 -- accel/accel.sh@42 -- # jq -r . 00:10:42.972 [2024-06-11 12:56:01.725250] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
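The dif_verify rows above differ slightly (425 vs 418 MiB/s); one plausible reading, inferred purely from the arithmetic and not stated in the log, is that the per-core row counts the 8 bytes of DIF metadata carried per 512-byte block while the Total row counts the 4096-byte payload only:
awk 'BEGIN { printf "%d MiB/s\n", 107200 * (4096 + 8 * 8) / (1024 * 1024) }'   # 425 MiB/s, 8 blocks x (512 + 8) bytes
awk 'BEGIN { printf "%d MiB/s\n", 107200 * 4096 / (1024 * 1024) }'             # 418 MiB/s, payload only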
00:10:42.972 [2024-06-11 12:56:01.725660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109586 ] 00:10:43.242 [2024-06-11 12:56:01.892928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.517 [2024-06-11 12:56:02.093495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val= 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val= 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val=0x1 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val= 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val= 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val=dif_verify 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val= 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.517 12:56:02 -- accel/accel.sh@21 -- # val=software 00:10:43.517 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.517 12:56:02 -- accel/accel.sh@23 -- # accel_module=software 00:10:43.517 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.518 12:56:02 -- 
accel/accel.sh@21 -- # val=32 00:10:43.518 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.518 12:56:02 -- accel/accel.sh@21 -- # val=32 00:10:43.518 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.518 12:56:02 -- accel/accel.sh@21 -- # val=1 00:10:43.518 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.518 12:56:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:43.518 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.518 12:56:02 -- accel/accel.sh@21 -- # val=No 00:10:43.518 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.518 12:56:02 -- accel/accel.sh@21 -- # val= 00:10:43.518 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:43.518 12:56:02 -- accel/accel.sh@21 -- # val= 00:10:43.518 12:56:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # IFS=: 00:10:43.518 12:56:02 -- accel/accel.sh@20 -- # read -r var val 00:10:45.420 12:56:04 -- accel/accel.sh@21 -- # val= 00:10:45.420 12:56:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # read -r var val 00:10:45.420 12:56:04 -- accel/accel.sh@21 -- # val= 00:10:45.420 12:56:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # read -r var val 00:10:45.420 12:56:04 -- accel/accel.sh@21 -- # val= 00:10:45.420 12:56:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # read -r var val 00:10:45.420 12:56:04 -- accel/accel.sh@21 -- # val= 00:10:45.420 12:56:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # read -r var val 00:10:45.420 12:56:04 -- accel/accel.sh@21 -- # val= 00:10:45.420 12:56:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # read -r var val 00:10:45.420 12:56:04 -- accel/accel.sh@21 -- # val= 00:10:45.420 12:56:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # IFS=: 00:10:45.420 12:56:04 -- accel/accel.sh@20 -- # read -r var val 00:10:45.420 12:56:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:45.420 12:56:04 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:45.420 12:56:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:45.420 00:10:45.420 real 0m4.719s 00:10:45.420 user 0m4.248s 00:10:45.420 sys 0m0.323s 00:10:45.420 12:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.420 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:10:45.420 ************************************ 00:10:45.420 END 
TEST accel_dif_verify 00:10:45.420 ************************************ 00:10:45.420 12:56:04 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:45.420 12:56:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:45.420 12:56:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:45.420 12:56:04 -- common/autotest_common.sh@10 -- # set +x 00:10:45.420 ************************************ 00:10:45.420 START TEST accel_dif_generate 00:10:45.420 ************************************ 00:10:45.420 12:56:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:10:45.420 12:56:04 -- accel/accel.sh@16 -- # local accel_opc 00:10:45.420 12:56:04 -- accel/accel.sh@17 -- # local accel_module 00:10:45.420 12:56:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:45.421 12:56:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:45.421 12:56:04 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.421 12:56:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.421 12:56:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.421 12:56:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.421 12:56:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.421 12:56:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.421 12:56:04 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.421 12:56:04 -- accel/accel.sh@42 -- # jq -r . 00:10:45.421 [2024-06-11 12:56:04.105646] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:45.421 [2024-06-11 12:56:04.105848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109640 ] 00:10:45.679 [2024-06-11 12:56:04.262254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.679 [2024-06-11 12:56:04.474085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.210 12:56:06 -- accel/accel.sh@18 -- # out=' 00:10:48.210 SPDK Configuration: 00:10:48.210 Core mask: 0x1 00:10:48.210 00:10:48.210 Accel Perf Configuration: 00:10:48.210 Workload Type: dif_generate 00:10:48.210 Vector size: 4096 bytes 00:10:48.210 Transfer size: 4096 bytes 00:10:48.210 Block size: 512 bytes 00:10:48.210 Metadata size: 8 bytes 00:10:48.210 Vector count 1 00:10:48.210 Module: software 00:10:48.210 Queue depth: 32 00:10:48.210 Allocate depth: 32 00:10:48.210 # threads/core: 1 00:10:48.210 Run time: 1 seconds 00:10:48.210 Verify: No 00:10:48.210 00:10:48.210 Running for 1 seconds... 
00:10:48.210 00:10:48.210 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:48.210 ------------------------------------------------------------------------------------ 00:10:48.210 0,0 132384/s 525 MiB/s 0 0 00:10:48.210 ==================================================================================== 00:10:48.210 Total 132384/s 517 MiB/s 0 0' 00:10:48.210 12:56:06 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:06 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:48.210 12:56:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:48.210 12:56:06 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.210 12:56:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.210 12:56:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.210 12:56:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.210 12:56:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.210 12:56:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.210 12:56:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.210 12:56:06 -- accel/accel.sh@42 -- # jq -r . 00:10:48.210 [2024-06-11 12:56:06.457991] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:48.210 [2024-06-11 12:56:06.458180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109679 ] 00:10:48.210 [2024-06-11 12:56:06.613409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.210 [2024-06-11 12:56:06.818368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val= 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val= 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val=0x1 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val= 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val= 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val=dif_generate 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 
00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val= 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val=software 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@23 -- # accel_module=software 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val=32 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val=32 00:10:48.210 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.210 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.210 12:56:07 -- accel/accel.sh@21 -- # val=1 00:10:48.211 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.211 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.211 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.211 12:56:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:48.211 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.211 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.211 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.211 12:56:07 -- accel/accel.sh@21 -- # val=No 00:10:48.211 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.211 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.211 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.211 12:56:07 -- accel/accel.sh@21 -- # val= 00:10:48.211 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.211 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.211 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:48.211 12:56:07 -- accel/accel.sh@21 -- # val= 00:10:48.211 12:56:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.211 12:56:07 -- accel/accel.sh@20 -- # IFS=: 00:10:48.211 12:56:07 -- accel/accel.sh@20 -- # read -r var val 00:10:50.111 12:56:08 -- accel/accel.sh@21 -- # val= 00:10:50.111 12:56:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.111 12:56:08 -- accel/accel.sh@20 -- # IFS=: 00:10:50.111 12:56:08 -- accel/accel.sh@20 -- # read -r var val 00:10:50.111 12:56:08 -- accel/accel.sh@21 -- # val= 00:10:50.111 12:56:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.111 12:56:08 -- accel/accel.sh@20 -- # IFS=: 00:10:50.111 12:56:08 -- accel/accel.sh@20 -- # read -r var val 00:10:50.111 12:56:08 -- accel/accel.sh@21 -- # val= 00:10:50.111 12:56:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.111 12:56:08 -- 
accel/accel.sh@20 -- # IFS=: 00:10:50.111 12:56:08 -- accel/accel.sh@20 -- # read -r var val 00:10:50.111 12:56:08 -- accel/accel.sh@21 -- # val= 00:10:50.111 12:56:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.111 12:56:08 -- accel/accel.sh@20 -- # IFS=: 00:10:50.112 12:56:08 -- accel/accel.sh@20 -- # read -r var val 00:10:50.112 12:56:08 -- accel/accel.sh@21 -- # val= 00:10:50.112 12:56:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.112 12:56:08 -- accel/accel.sh@20 -- # IFS=: 00:10:50.112 12:56:08 -- accel/accel.sh@20 -- # read -r var val 00:10:50.112 12:56:08 -- accel/accel.sh@21 -- # val= 00:10:50.112 12:56:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.112 12:56:08 -- accel/accel.sh@20 -- # IFS=: 00:10:50.112 12:56:08 -- accel/accel.sh@20 -- # read -r var val 00:10:50.112 12:56:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:50.112 12:56:08 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:50.112 12:56:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:50.112 00:10:50.112 real 0m4.712s 00:10:50.112 user 0m4.195s 00:10:50.112 sys 0m0.360s 00:10:50.112 12:56:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.112 12:56:08 -- common/autotest_common.sh@10 -- # set +x 00:10:50.112 ************************************ 00:10:50.112 END TEST accel_dif_generate 00:10:50.112 ************************************ 00:10:50.112 12:56:08 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:50.112 12:56:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:50.112 12:56:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:50.112 12:56:08 -- common/autotest_common.sh@10 -- # set +x 00:10:50.112 ************************************ 00:10:50.112 START TEST accel_dif_generate_copy 00:10:50.112 ************************************ 00:10:50.112 12:56:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:10:50.112 12:56:08 -- accel/accel.sh@16 -- # local accel_opc 00:10:50.112 12:56:08 -- accel/accel.sh@17 -- # local accel_module 00:10:50.112 12:56:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:50.112 12:56:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:50.112 12:56:08 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.112 12:56:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.112 12:56:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.112 12:56:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.112 12:56:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.112 12:56:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.112 12:56:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.112 12:56:08 -- accel/accel.sh@42 -- # jq -r . 00:10:50.112 [2024-06-11 12:56:08.862488] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
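Every case in this log is launched through a run_test helper (the common/autotest_common.sh@1077/@1083/@1104 frames), which is responsible for the START TEST / END TEST banners and the real/user/sys timing that bracket each workload. A rough sketch of what such a wrapper does, reconstructed from the banners and timing lines in this log rather than from autotest_common.sh itself:

    # Hedged sketch of a run_test-style wrapper; the real helper lives in
    # autotest_common.sh and almost certainly differs in detail.
    run_test_sketch() {
        local name=$1; shift
        printf '%s\n' "************************************" \
                      "START TEST $name" \
                      "************************************"
        time "$@"    # emits the real/user/sys lines seen after each case
        printf '%s\n' "************************************" \
                      "END TEST $name" \
                      "************************************"
    }
    # e.g. run_test_sketch accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy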
00:10:50.112 [2024-06-11 12:56:08.862641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109727 ] 00:10:50.371 [2024-06-11 12:56:09.018238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.371 [2024-06-11 12:56:09.199433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.901 12:56:11 -- accel/accel.sh@18 -- # out=' 00:10:52.901 SPDK Configuration: 00:10:52.901 Core mask: 0x1 00:10:52.901 00:10:52.901 Accel Perf Configuration: 00:10:52.901 Workload Type: dif_generate_copy 00:10:52.901 Vector size: 4096 bytes 00:10:52.901 Transfer size: 4096 bytes 00:10:52.901 Vector count 1 00:10:52.901 Module: software 00:10:52.901 Queue depth: 32 00:10:52.901 Allocate depth: 32 00:10:52.901 # threads/core: 1 00:10:52.901 Run time: 1 seconds 00:10:52.901 Verify: No 00:10:52.901 00:10:52.901 Running for 1 seconds... 00:10:52.901 00:10:52.901 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:52.901 ------------------------------------------------------------------------------------ 00:10:52.901 0,0 103968/s 412 MiB/s 0 0 00:10:52.901 ==================================================================================== 00:10:52.901 Total 103968/s 406 MiB/s 0 0' 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.901 12:56:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:52.901 12:56:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.901 12:56:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:52.901 12:56:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.901 12:56:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.901 12:56:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.901 12:56:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.901 12:56:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.901 12:56:11 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.901 12:56:11 -- accel/accel.sh@42 -- # jq -r . 00:10:52.901 [2024-06-11 12:56:11.202639] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
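The dif_generate_copy summary above reports 103968 transfers per second at a 4096-byte transfer size, and the Total row's bandwidth is simply the product of the two. A one-line check of that figure (not part of the run):

    # 103968 transfers/s * 4096 bytes, converted to MiB/s with integer division
    echo $((103968 * 4096 / 1024 / 1024))   # prints 406, matching the Total row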
00:10:52.901 [2024-06-11 12:56:11.202837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109780 ] 00:10:52.901 [2024-06-11 12:56:11.353720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.901 [2024-06-11 12:56:11.533872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.901 12:56:11 -- accel/accel.sh@21 -- # val= 00:10:52.901 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.901 12:56:11 -- accel/accel.sh@21 -- # val= 00:10:52.901 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.901 12:56:11 -- accel/accel.sh@21 -- # val=0x1 00:10:52.901 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.901 12:56:11 -- accel/accel.sh@21 -- # val= 00:10:52.901 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.901 12:56:11 -- accel/accel.sh@21 -- # val= 00:10:52.901 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.901 12:56:11 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:52.901 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.901 12:56:11 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:52.901 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.902 12:56:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:52.902 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.902 12:56:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:52.902 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.902 12:56:11 -- accel/accel.sh@21 -- # val= 00:10:52.902 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.902 12:56:11 -- accel/accel.sh@21 -- # val=software 00:10:52.902 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.902 12:56:11 -- accel/accel.sh@23 -- # accel_module=software 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.902 12:56:11 -- accel/accel.sh@21 -- # val=32 00:10:52.902 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.902 12:56:11 -- accel/accel.sh@21 -- # val=32 00:10:52.902 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.902 12:56:11 -- accel/accel.sh@21 
-- # val=1 00:10:52.902 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.902 12:56:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:52.902 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.902 12:56:11 -- accel/accel.sh@21 -- # val=No 00:10:52.902 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.902 12:56:11 -- accel/accel.sh@21 -- # val= 00:10:52.902 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:52.902 12:56:11 -- accel/accel.sh@21 -- # val= 00:10:52.902 12:56:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # IFS=: 00:10:52.902 12:56:11 -- accel/accel.sh@20 -- # read -r var val 00:10:54.807 12:56:13 -- accel/accel.sh@21 -- # val= 00:10:54.807 12:56:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # IFS=: 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.807 12:56:13 -- accel/accel.sh@21 -- # val= 00:10:54.807 12:56:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # IFS=: 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.807 12:56:13 -- accel/accel.sh@21 -- # val= 00:10:54.807 12:56:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # IFS=: 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.807 12:56:13 -- accel/accel.sh@21 -- # val= 00:10:54.807 12:56:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # IFS=: 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.807 12:56:13 -- accel/accel.sh@21 -- # val= 00:10:54.807 12:56:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # IFS=: 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.807 12:56:13 -- accel/accel.sh@21 -- # val= 00:10:54.807 12:56:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # IFS=: 00:10:54.807 12:56:13 -- accel/accel.sh@20 -- # read -r var val 00:10:54.807 12:56:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:54.807 12:56:13 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:54.807 12:56:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:54.807 00:10:54.807 real 0m4.650s 00:10:54.807 user 0m4.172s 00:10:54.807 sys 0m0.327s 00:10:54.807 12:56:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.807 12:56:13 -- common/autotest_common.sh@10 -- # set +x 00:10:54.807 ************************************ 00:10:54.807 END TEST accel_dif_generate_copy 00:10:54.807 ************************************ 00:10:54.807 12:56:13 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:54.807 12:56:13 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:54.807 12:56:13 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:54.807 12:56:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:54.807 12:56:13 -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.807 ************************************ 00:10:54.807 START TEST accel_comp 00:10:54.807 ************************************ 00:10:54.807 12:56:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:54.807 12:56:13 -- accel/accel.sh@16 -- # local accel_opc 00:10:54.807 12:56:13 -- accel/accel.sh@17 -- # local accel_module 00:10:54.807 12:56:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:54.807 12:56:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:54.807 12:56:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.807 12:56:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.807 12:56:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.807 12:56:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.807 12:56:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.807 12:56:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.807 12:56:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.807 12:56:13 -- accel/accel.sh@42 -- # jq -r . 00:10:54.807 [2024-06-11 12:56:13.574252] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:54.807 [2024-06-11 12:56:13.574444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109833 ] 00:10:55.068 [2024-06-11 12:56:13.740634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.330 [2024-06-11 12:56:13.925266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.234 12:56:15 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:57.234 00:10:57.234 SPDK Configuration: 00:10:57.234 Core mask: 0x1 00:10:57.234 00:10:57.234 Accel Perf Configuration: 00:10:57.234 Workload Type: compress 00:10:57.234 Transfer size: 4096 bytes 00:10:57.234 Vector count 1 00:10:57.234 Module: software 00:10:57.234 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:57.234 Queue depth: 32 00:10:57.234 Allocate depth: 32 00:10:57.234 # threads/core: 1 00:10:57.234 Run time: 1 seconds 00:10:57.234 Verify: No 00:10:57.234 00:10:57.234 Running for 1 seconds... 
00:10:57.234 00:10:57.234 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:57.234 ------------------------------------------------------------------------------------ 00:10:57.234 0,0 56512/s 235 MiB/s 0 0 00:10:57.234 ==================================================================================== 00:10:57.234 Total 56512/s 220 MiB/s 0 0' 00:10:57.234 12:56:15 -- accel/accel.sh@20 -- # IFS=: 00:10:57.234 12:56:15 -- accel/accel.sh@20 -- # read -r var val 00:10:57.234 12:56:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:57.234 12:56:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:57.234 12:56:15 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.234 12:56:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.234 12:56:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.234 12:56:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.234 12:56:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.234 12:56:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.234 12:56:15 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.234 12:56:15 -- accel/accel.sh@42 -- # jq -r . 00:10:57.234 [2024-06-11 12:56:15.917831] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:57.234 [2024-06-11 12:56:15.918669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109869 ] 00:10:57.493 [2024-06-11 12:56:16.082054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.493 [2024-06-11 12:56:16.270841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.751 12:56:16 -- accel/accel.sh@21 -- # val= 00:10:57.751 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.751 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.751 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.751 12:56:16 -- accel/accel.sh@21 -- # val= 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val= 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val=0x1 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val= 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val= 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val=compress 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 
00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val= 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val=software 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@23 -- # accel_module=software 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val=32 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val=32 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val=1 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val=No 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val= 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:57.752 12:56:16 -- accel/accel.sh@21 -- # val= 00:10:57.752 12:56:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # IFS=: 00:10:57.752 12:56:16 -- accel/accel.sh@20 -- # read -r var val 00:10:59.656 12:56:18 -- accel/accel.sh@21 -- # val= 00:10:59.656 12:56:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.656 12:56:18 -- accel/accel.sh@21 -- # val= 00:10:59.656 12:56:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.656 12:56:18 -- accel/accel.sh@21 -- # val= 00:10:59.656 12:56:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.656 12:56:18 -- accel/accel.sh@21 -- # val= 
00:10:59.656 12:56:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.656 12:56:18 -- accel/accel.sh@21 -- # val= 00:10:59.656 12:56:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.656 12:56:18 -- accel/accel.sh@21 -- # val= 00:10:59.656 12:56:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # IFS=: 00:10:59.656 12:56:18 -- accel/accel.sh@20 -- # read -r var val 00:10:59.656 12:56:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:59.656 12:56:18 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:59.656 12:56:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:59.656 ************************************ 00:10:59.656 END TEST accel_comp 00:10:59.656 00:10:59.656 real 0m4.690s 00:10:59.656 user 0m4.188s 00:10:59.656 sys 0m0.331s 00:10:59.656 12:56:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.656 12:56:18 -- common/autotest_common.sh@10 -- # set +x 00:10:59.656 ************************************ 00:10:59.657 12:56:18 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:59.657 12:56:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:59.657 12:56:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:59.657 12:56:18 -- common/autotest_common.sh@10 -- # set +x 00:10:59.657 ************************************ 00:10:59.657 START TEST accel_decomp 00:10:59.657 ************************************ 00:10:59.657 12:56:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:59.657 12:56:18 -- accel/accel.sh@16 -- # local accel_opc 00:10:59.657 12:56:18 -- accel/accel.sh@17 -- # local accel_module 00:10:59.657 12:56:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:59.657 12:56:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:59.657 12:56:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:59.657 12:56:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:59.657 12:56:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:59.657 12:56:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:59.657 12:56:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:59.657 12:56:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:59.657 12:56:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:59.657 12:56:18 -- accel/accel.sh@42 -- # jq -r . 00:10:59.657 [2024-06-11 12:56:18.305724] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:59.657 [2024-06-11 12:56:18.306005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109923 ] 00:10:59.657 [2024-06-11 12:56:18.456995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.916 [2024-06-11 12:56:18.656343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.818 12:56:20 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:01.818 00:11:01.818 SPDK Configuration: 00:11:01.818 Core mask: 0x1 00:11:01.818 00:11:01.818 Accel Perf Configuration: 00:11:01.818 Workload Type: decompress 00:11:01.818 Transfer size: 4096 bytes 00:11:01.818 Vector count 1 00:11:01.818 Module: software 00:11:01.818 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:01.818 Queue depth: 32 00:11:01.818 Allocate depth: 32 00:11:01.818 # threads/core: 1 00:11:01.818 Run time: 1 seconds 00:11:01.818 Verify: Yes 00:11:01.818 00:11:01.818 Running for 1 seconds... 00:11:01.818 00:11:01.818 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:01.818 ------------------------------------------------------------------------------------ 00:11:01.818 0,0 71840/s 132 MiB/s 0 0 00:11:01.818 ==================================================================================== 00:11:01.818 Total 71840/s 280 MiB/s 0 0' 00:11:01.818 12:56:20 -- accel/accel.sh@20 -- # IFS=: 00:11:01.818 12:56:20 -- accel/accel.sh@20 -- # read -r var val 00:11:01.818 12:56:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:01.818 12:56:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:01.818 12:56:20 -- accel/accel.sh@12 -- # build_accel_config 00:11:01.818 12:56:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:01.818 12:56:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:01.818 12:56:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:01.818 12:56:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:01.818 12:56:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:01.818 12:56:20 -- accel/accel.sh@41 -- # local IFS=, 00:11:01.818 12:56:20 -- accel/accel.sh@42 -- # jq -r . 00:11:01.818 [2024-06-11 12:56:20.627708] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
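The decompress runs above reuse the test/accel/bib input via -l (the File Name line in the configuration block) and add -y, which is why they report Verify: Yes where the compress and DIF cases said No. A standalone sketch of that invocation, again using only flags present in the trace and the assumed repo path:

    # Hedged sketch: one-second decompress pass over the bundled bib input, with verification.
    SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_REPO/test/accel/bib" -y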
00:11:01.818 [2024-06-11 12:56:20.627933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109957 ] 00:11:02.077 [2024-06-11 12:56:20.792150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.336 [2024-06-11 12:56:20.973712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val= 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val= 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val= 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val=0x1 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val= 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val= 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val=decompress 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val= 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val=software 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@23 -- # accel_module=software 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val=32 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- 
accel/accel.sh@21 -- # val=32 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val=1 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val=Yes 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val= 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:02.336 12:56:21 -- accel/accel.sh@21 -- # val= 00:11:02.336 12:56:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # IFS=: 00:11:02.336 12:56:21 -- accel/accel.sh@20 -- # read -r var val 00:11:04.240 12:56:22 -- accel/accel.sh@21 -- # val= 00:11:04.240 12:56:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.240 12:56:22 -- accel/accel.sh@21 -- # val= 00:11:04.240 12:56:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.240 12:56:22 -- accel/accel.sh@21 -- # val= 00:11:04.240 12:56:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.240 12:56:22 -- accel/accel.sh@21 -- # val= 00:11:04.240 12:56:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.240 12:56:22 -- accel/accel.sh@21 -- # val= 00:11:04.240 12:56:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.240 12:56:22 -- accel/accel.sh@21 -- # val= 00:11:04.240 12:56:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # IFS=: 00:11:04.240 12:56:22 -- accel/accel.sh@20 -- # read -r var val 00:11:04.240 12:56:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:04.240 12:56:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:04.240 12:56:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:04.240 ************************************ 00:11:04.240 END TEST accel_decomp 00:11:04.240 ************************************ 00:11:04.240 00:11:04.240 real 0m4.623s 00:11:04.240 user 0m4.130s 00:11:04.240 sys 0m0.335s 00:11:04.240 12:56:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.240 12:56:22 -- common/autotest_common.sh@10 -- # set +x 00:11:04.240 12:56:22 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
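The accel_decmop_full case queued above adds -o 0 to the same decompress command line; its configuration block below reports Transfer size: 111250 bytes rather than the 4096 bytes used elsewhere, so the tool apparently derives the transfer size from the input when -o 0 is given — an inference from this log, not a documented description of the flag. The resulting Total row (5248 transfers/s at 111250 bytes) again works out to the bandwidth it prints:

    # 5248 transfers/s * 111250 bytes, converted to MiB/s with integer division
    echo $((5248 * 111250 / 1024 / 1024))   # prints 556, matching the Total row below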
00:11:04.240 12:56:22 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:04.240 12:56:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:04.240 12:56:22 -- common/autotest_common.sh@10 -- # set +x 00:11:04.240 ************************************ 00:11:04.240 START TEST accel_decmop_full 00:11:04.240 ************************************ 00:11:04.240 12:56:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:04.240 12:56:22 -- accel/accel.sh@16 -- # local accel_opc 00:11:04.240 12:56:22 -- accel/accel.sh@17 -- # local accel_module 00:11:04.240 12:56:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:04.240 12:56:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:04.240 12:56:22 -- accel/accel.sh@12 -- # build_accel_config 00:11:04.240 12:56:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:04.240 12:56:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:04.240 12:56:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:04.240 12:56:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:04.240 12:56:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:04.240 12:56:22 -- accel/accel.sh@41 -- # local IFS=, 00:11:04.240 12:56:22 -- accel/accel.sh@42 -- # jq -r . 00:11:04.240 [2024-06-11 12:56:22.984706] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:04.240 [2024-06-11 12:56:22.984893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110026 ] 00:11:04.499 [2024-06-11 12:56:23.153077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.758 [2024-06-11 12:56:23.345492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.664 12:56:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:06.664 00:11:06.664 SPDK Configuration: 00:11:06.664 Core mask: 0x1 00:11:06.664 00:11:06.664 Accel Perf Configuration: 00:11:06.664 Workload Type: decompress 00:11:06.664 Transfer size: 111250 bytes 00:11:06.664 Vector count 1 00:11:06.664 Module: software 00:11:06.664 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:06.664 Queue depth: 32 00:11:06.664 Allocate depth: 32 00:11:06.664 # threads/core: 1 00:11:06.664 Run time: 1 seconds 00:11:06.664 Verify: Yes 00:11:06.664 00:11:06.664 Running for 1 seconds... 
00:11:06.664 00:11:06.664 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:06.664 ------------------------------------------------------------------------------------ 00:11:06.664 0,0 5248/s 216 MiB/s 0 0 00:11:06.664 ==================================================================================== 00:11:06.664 Total 5248/s 556 MiB/s 0 0' 00:11:06.664 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:06.664 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:06.664 12:56:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:06.664 12:56:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:06.664 12:56:25 -- accel/accel.sh@12 -- # build_accel_config 00:11:06.664 12:56:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:06.664 12:56:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:06.664 12:56:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:06.664 12:56:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:06.664 12:56:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:06.664 12:56:25 -- accel/accel.sh@41 -- # local IFS=, 00:11:06.664 12:56:25 -- accel/accel.sh@42 -- # jq -r . 00:11:06.664 [2024-06-11 12:56:25.356333] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:06.664 [2024-06-11 12:56:25.356537] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110066 ] 00:11:06.922 [2024-06-11 12:56:25.524188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.922 [2024-06-11 12:56:25.708855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.181 12:56:25 -- accel/accel.sh@21 -- # val= 00:11:07.181 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.181 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.181 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.181 12:56:25 -- accel/accel.sh@21 -- # val= 00:11:07.181 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.181 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.181 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.181 12:56:25 -- accel/accel.sh@21 -- # val= 00:11:07.181 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.181 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.181 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.181 12:56:25 -- accel/accel.sh@21 -- # val=0x1 00:11:07.181 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.181 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.181 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.181 12:56:25 -- accel/accel.sh@21 -- # val= 00:11:07.181 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.181 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val= 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val=decompress 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:07.182 12:56:25 -- 
accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val= 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val=software 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@23 -- # accel_module=software 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val=32 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val=32 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val=1 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val=Yes 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val= 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:07.182 12:56:25 -- accel/accel.sh@21 -- # val= 00:11:07.182 12:56:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # IFS=: 00:11:07.182 12:56:25 -- accel/accel.sh@20 -- # read -r var val 00:11:09.086 12:56:27 -- accel/accel.sh@21 -- # val= 00:11:09.086 12:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # IFS=: 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # read -r var val 00:11:09.086 12:56:27 -- accel/accel.sh@21 -- # val= 00:11:09.086 12:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # IFS=: 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # read -r var val 00:11:09.086 12:56:27 -- accel/accel.sh@21 -- # val= 00:11:09.086 12:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # IFS=: 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # read -r var val 00:11:09.086 12:56:27 -- 
accel/accel.sh@21 -- # val= 00:11:09.086 12:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # IFS=: 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # read -r var val 00:11:09.086 12:56:27 -- accel/accel.sh@21 -- # val= 00:11:09.086 12:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # IFS=: 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # read -r var val 00:11:09.086 12:56:27 -- accel/accel.sh@21 -- # val= 00:11:09.086 12:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # IFS=: 00:11:09.086 12:56:27 -- accel/accel.sh@20 -- # read -r var val 00:11:09.086 12:56:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:09.086 12:56:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:09.086 12:56:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:09.086 00:11:09.086 real 0m4.727s 00:11:09.086 user 0m4.210s 00:11:09.086 sys 0m0.370s 00:11:09.086 12:56:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.086 ************************************ 00:11:09.086 END TEST accel_decmop_full 00:11:09.086 ************************************ 00:11:09.086 12:56:27 -- common/autotest_common.sh@10 -- # set +x 00:11:09.086 12:56:27 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:09.086 12:56:27 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:09.086 12:56:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:09.086 12:56:27 -- common/autotest_common.sh@10 -- # set +x 00:11:09.086 ************************************ 00:11:09.086 START TEST accel_decomp_mcore 00:11:09.086 ************************************ 00:11:09.086 12:56:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:09.086 12:56:27 -- accel/accel.sh@16 -- # local accel_opc 00:11:09.086 12:56:27 -- accel/accel.sh@17 -- # local accel_module 00:11:09.087 12:56:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:09.087 12:56:27 -- accel/accel.sh@12 -- # build_accel_config 00:11:09.087 12:56:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:09.087 12:56:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:09.087 12:56:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:09.087 12:56:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:09.087 12:56:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:09.087 12:56:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:09.087 12:56:27 -- accel/accel.sh@41 -- # local IFS=, 00:11:09.087 12:56:27 -- accel/accel.sh@42 -- # jq -r . 00:11:09.087 [2024-06-11 12:56:27.756954] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:09.087 [2024-06-11 12:56:27.757137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110113 ] 00:11:09.346 [2024-06-11 12:56:27.930183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.346 [2024-06-11 12:56:28.114591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.346 [2024-06-11 12:56:28.114726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.346 [2024-06-11 12:56:28.114878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.346 [2024-06-11 12:56:28.114885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.879 12:56:30 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:11.879 00:11:11.879 SPDK Configuration: 00:11:11.879 Core mask: 0xf 00:11:11.879 00:11:11.879 Accel Perf Configuration: 00:11:11.879 Workload Type: decompress 00:11:11.879 Transfer size: 4096 bytes 00:11:11.879 Vector count 1 00:11:11.879 Module: software 00:11:11.879 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:11.879 Queue depth: 32 00:11:11.879 Allocate depth: 32 00:11:11.879 # threads/core: 1 00:11:11.879 Run time: 1 seconds 00:11:11.879 Verify: Yes 00:11:11.879 00:11:11.879 Running for 1 seconds... 00:11:11.879 00:11:11.879 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:11.879 ------------------------------------------------------------------------------------ 00:11:11.879 0,0 53344/s 98 MiB/s 0 0 00:11:11.879 3,0 53376/s 98 MiB/s 0 0 00:11:11.879 2,0 52096/s 95 MiB/s 0 0 00:11:11.879 1,0 52992/s 97 MiB/s 0 0 00:11:11.879 ==================================================================================== 00:11:11.879 Total 211808/s 827 MiB/s 0 0' 00:11:11.879 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:11.879 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:11.879 12:56:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:11.879 12:56:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:11.879 12:56:30 -- accel/accel.sh@12 -- # build_accel_config 00:11:11.879 12:56:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:11.879 12:56:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:11.879 12:56:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:11.879 12:56:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:11.879 12:56:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:11.879 12:56:30 -- accel/accel.sh@41 -- # local IFS=, 00:11:11.879 12:56:30 -- accel/accel.sh@42 -- # jq -r . 00:11:11.879 [2024-06-11 12:56:30.196115] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:11.879 [2024-06-11 12:56:30.196308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110150 ] 00:11:11.879 [2024-06-11 12:56:30.381551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.879 [2024-06-11 12:56:30.578546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.879 [2024-06-11 12:56:30.578677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.879 [2024-06-11 12:56:30.578812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.879 [2024-06-11 12:56:30.578801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val= 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val= 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val= 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val=0xf 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val= 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val= 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val=decompress 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val= 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val=software 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@23 -- # accel_module=software 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 
00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.137 12:56:30 -- accel/accel.sh@21 -- # val=32 00:11:12.137 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.137 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.138 12:56:30 -- accel/accel.sh@21 -- # val=32 00:11:12.138 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.138 12:56:30 -- accel/accel.sh@21 -- # val=1 00:11:12.138 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.138 12:56:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:12.138 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.138 12:56:30 -- accel/accel.sh@21 -- # val=Yes 00:11:12.138 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.138 12:56:30 -- accel/accel.sh@21 -- # val= 00:11:12.138 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:12.138 12:56:30 -- accel/accel.sh@21 -- # val= 00:11:12.138 12:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # IFS=: 00:11:12.138 12:56:30 -- accel/accel.sh@20 -- # read -r var val 00:11:14.037 12:56:32 -- accel/accel.sh@21 -- # val= 00:11:14.037 12:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.037 12:56:32 -- accel/accel.sh@21 -- # val= 00:11:14.037 12:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.037 12:56:32 -- accel/accel.sh@21 -- # val= 00:11:14.037 12:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.037 12:56:32 -- accel/accel.sh@21 -- # val= 00:11:14.037 12:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.037 12:56:32 -- accel/accel.sh@21 -- # val= 00:11:14.037 12:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.037 12:56:32 -- accel/accel.sh@21 -- # val= 00:11:14.037 12:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.037 12:56:32 -- accel/accel.sh@21 -- # val= 00:11:14.037 12:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.037 12:56:32 -- accel/accel.sh@21 -- # val= 00:11:14.037 12:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.037 12:56:32 -- 
accel/accel.sh@20 -- # read -r var val 00:11:14.037 12:56:32 -- accel/accel.sh@21 -- # val= 00:11:14.037 12:56:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # IFS=: 00:11:14.037 12:56:32 -- accel/accel.sh@20 -- # read -r var val 00:11:14.037 12:56:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:14.037 12:56:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:14.037 12:56:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:14.037 ************************************ 00:11:14.037 END TEST accel_decomp_mcore 00:11:14.037 ************************************ 00:11:14.037 00:11:14.037 real 0m4.894s 00:11:14.037 user 0m14.399s 00:11:14.037 sys 0m0.447s 00:11:14.037 12:56:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.037 12:56:32 -- common/autotest_common.sh@10 -- # set +x 00:11:14.037 12:56:32 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:14.037 12:56:32 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:14.037 12:56:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.037 12:56:32 -- common/autotest_common.sh@10 -- # set +x 00:11:14.037 ************************************ 00:11:14.037 START TEST accel_decomp_full_mcore 00:11:14.037 ************************************ 00:11:14.037 12:56:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:14.037 12:56:32 -- accel/accel.sh@16 -- # local accel_opc 00:11:14.037 12:56:32 -- accel/accel.sh@17 -- # local accel_module 00:11:14.037 12:56:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:14.037 12:56:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:14.037 12:56:32 -- accel/accel.sh@12 -- # build_accel_config 00:11:14.037 12:56:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:14.037 12:56:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:14.037 12:56:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:14.037 12:56:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:14.037 12:56:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:14.037 12:56:32 -- accel/accel.sh@41 -- # local IFS=, 00:11:14.037 12:56:32 -- accel/accel.sh@42 -- # jq -r . 00:11:14.037 [2024-06-11 12:56:32.708987] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:14.037 [2024-06-11 12:56:32.709170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110222 ] 00:11:14.295 [2024-06-11 12:56:32.894203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.295 [2024-06-11 12:56:33.099682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.295 [2024-06-11 12:56:33.099814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.295 [2024-06-11 12:56:33.099913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.295 [2024-06-11 12:56:33.099918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.836 12:56:35 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:16.836 00:11:16.836 SPDK Configuration: 00:11:16.836 Core mask: 0xf 00:11:16.836 00:11:16.836 Accel Perf Configuration: 00:11:16.836 Workload Type: decompress 00:11:16.836 Transfer size: 111250 bytes 00:11:16.836 Vector count 1 00:11:16.836 Module: software 00:11:16.836 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:16.836 Queue depth: 32 00:11:16.836 Allocate depth: 32 00:11:16.836 # threads/core: 1 00:11:16.836 Run time: 1 seconds 00:11:16.836 Verify: Yes 00:11:16.836 00:11:16.836 Running for 1 seconds... 00:11:16.836 00:11:16.836 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:16.836 ------------------------------------------------------------------------------------ 00:11:16.836 0,0 4704/s 194 MiB/s 0 0 00:11:16.836 3,0 4704/s 194 MiB/s 0 0 00:11:16.836 2,0 5120/s 211 MiB/s 0 0 00:11:16.836 1,0 4704/s 194 MiB/s 0 0 00:11:16.836 ==================================================================================== 00:11:16.836 Total 19232/s 2040 MiB/s 0 0' 00:11:16.836 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:16.836 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:16.836 12:56:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:16.836 12:56:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:16.836 12:56:35 -- accel/accel.sh@12 -- # build_accel_config 00:11:16.836 12:56:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:16.836 12:56:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:16.836 12:56:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:16.836 12:56:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:16.836 12:56:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:16.836 12:56:35 -- accel/accel.sh@41 -- # local IFS=, 00:11:16.836 12:56:35 -- accel/accel.sh@42 -- # jq -r . 00:11:16.836 [2024-06-11 12:56:35.190382] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:16.836 [2024-06-11 12:56:35.190564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110263 ] 00:11:16.836 [2024-06-11 12:56:35.376589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.836 [2024-06-11 12:56:35.585476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.836 [2024-06-11 12:56:35.585591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.836 [2024-06-11 12:56:35.585701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.836 [2024-06-11 12:56:35.585702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.094 12:56:35 -- accel/accel.sh@21 -- # val= 00:11:17.094 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.094 12:56:35 -- accel/accel.sh@21 -- # val= 00:11:17.094 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.094 12:56:35 -- accel/accel.sh@21 -- # val= 00:11:17.094 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.094 12:56:35 -- accel/accel.sh@21 -- # val=0xf 00:11:17.094 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.094 12:56:35 -- accel/accel.sh@21 -- # val= 00:11:17.094 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.094 12:56:35 -- accel/accel.sh@21 -- # val= 00:11:17.094 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.094 12:56:35 -- accel/accel.sh@21 -- # val=decompress 00:11:17.094 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.094 12:56:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.094 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.095 12:56:35 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:17.095 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.095 12:56:35 -- accel/accel.sh@21 -- # val= 00:11:17.095 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.095 12:56:35 -- accel/accel.sh@21 -- # val=software 00:11:17.095 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.095 12:56:35 -- accel/accel.sh@23 -- # accel_module=software 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.095 12:56:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:17.095 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # IFS=: 
00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.095 12:56:35 -- accel/accel.sh@21 -- # val=32 00:11:17.095 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.095 12:56:35 -- accel/accel.sh@21 -- # val=32 00:11:17.095 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.095 12:56:35 -- accel/accel.sh@21 -- # val=1 00:11:17.095 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.095 12:56:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:17.095 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.095 12:56:35 -- accel/accel.sh@21 -- # val=Yes 00:11:17.095 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.095 12:56:35 -- accel/accel.sh@21 -- # val= 00:11:17.095 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:17.095 12:56:35 -- accel/accel.sh@21 -- # val= 00:11:17.095 12:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # IFS=: 00:11:17.095 12:56:35 -- accel/accel.sh@20 -- # read -r var val 00:11:19.020 12:56:37 -- accel/accel.sh@21 -- # val= 00:11:19.020 12:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # IFS=: 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # read -r var val 00:11:19.020 12:56:37 -- accel/accel.sh@21 -- # val= 00:11:19.020 12:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # IFS=: 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # read -r var val 00:11:19.020 12:56:37 -- accel/accel.sh@21 -- # val= 00:11:19.020 12:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # IFS=: 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # read -r var val 00:11:19.020 12:56:37 -- accel/accel.sh@21 -- # val= 00:11:19.020 12:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # IFS=: 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # read -r var val 00:11:19.020 12:56:37 -- accel/accel.sh@21 -- # val= 00:11:19.020 12:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # IFS=: 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # read -r var val 00:11:19.020 12:56:37 -- accel/accel.sh@21 -- # val= 00:11:19.020 12:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # IFS=: 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # read -r var val 00:11:19.020 12:56:37 -- accel/accel.sh@21 -- # val= 00:11:19.020 12:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # IFS=: 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # read -r var val 00:11:19.020 12:56:37 -- accel/accel.sh@21 -- # val= 00:11:19.020 12:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # IFS=: 00:11:19.020 12:56:37 -- 
accel/accel.sh@20 -- # read -r var val 00:11:19.020 12:56:37 -- accel/accel.sh@21 -- # val= 00:11:19.020 12:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # IFS=: 00:11:19.020 12:56:37 -- accel/accel.sh@20 -- # read -r var val 00:11:19.020 12:56:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:19.020 12:56:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:19.020 12:56:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:19.020 ************************************ 00:11:19.020 END TEST accel_decomp_full_mcore 00:11:19.020 ************************************ 00:11:19.020 00:11:19.020 real 0m4.969s 00:11:19.020 user 0m14.596s 00:11:19.020 sys 0m0.393s 00:11:19.020 12:56:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.020 12:56:37 -- common/autotest_common.sh@10 -- # set +x 00:11:19.020 12:56:37 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:19.020 12:56:37 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:19.020 12:56:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:19.020 12:56:37 -- common/autotest_common.sh@10 -- # set +x 00:11:19.020 ************************************ 00:11:19.020 START TEST accel_decomp_mthread 00:11:19.020 ************************************ 00:11:19.020 12:56:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:19.020 12:56:37 -- accel/accel.sh@16 -- # local accel_opc 00:11:19.020 12:56:37 -- accel/accel.sh@17 -- # local accel_module 00:11:19.020 12:56:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:19.020 12:56:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:19.021 12:56:37 -- accel/accel.sh@12 -- # build_accel_config 00:11:19.021 12:56:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:19.021 12:56:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.021 12:56:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.021 12:56:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:19.021 12:56:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:19.021 12:56:37 -- accel/accel.sh@41 -- # local IFS=, 00:11:19.021 12:56:37 -- accel/accel.sh@42 -- # jq -r . 00:11:19.021 [2024-06-11 12:56:37.731108] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:19.021 [2024-06-11 12:56:37.731335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110321 ] 00:11:19.279 [2024-06-11 12:56:37.900787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.279 [2024-06-11 12:56:38.091058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.810 12:56:40 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:21.810 00:11:21.810 SPDK Configuration: 00:11:21.810 Core mask: 0x1 00:11:21.810 00:11:21.810 Accel Perf Configuration: 00:11:21.810 Workload Type: decompress 00:11:21.811 Transfer size: 4096 bytes 00:11:21.811 Vector count 1 00:11:21.811 Module: software 00:11:21.811 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:21.811 Queue depth: 32 00:11:21.811 Allocate depth: 32 00:11:21.811 # threads/core: 2 00:11:21.811 Run time: 1 seconds 00:11:21.811 Verify: Yes 00:11:21.811 00:11:21.811 Running for 1 seconds... 00:11:21.811 00:11:21.811 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:21.811 ------------------------------------------------------------------------------------ 00:11:21.811 0,1 34880/s 64 MiB/s 0 0 00:11:21.811 0,0 34784/s 64 MiB/s 0 0 00:11:21.811 ==================================================================================== 00:11:21.811 Total 69664/s 272 MiB/s 0 0' 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:21.811 12:56:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:21.811 12:56:40 -- accel/accel.sh@12 -- # build_accel_config 00:11:21.811 12:56:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:21.811 12:56:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:21.811 12:56:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:21.811 12:56:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:21.811 12:56:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:21.811 12:56:40 -- accel/accel.sh@41 -- # local IFS=, 00:11:21.811 12:56:40 -- accel/accel.sh@42 -- # jq -r . 00:11:21.811 [2024-06-11 12:56:40.089646] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:21.811 [2024-06-11 12:56:40.089849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110355 ] 00:11:21.811 [2024-06-11 12:56:40.250729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.811 [2024-06-11 12:56:40.450696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val= 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val= 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val= 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val=0x1 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val= 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val= 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val=decompress 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val= 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val=software 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@23 -- # accel_module=software 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val=32 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- 
accel/accel.sh@21 -- # val=32 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val=2 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val=Yes 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val= 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:21.811 12:56:40 -- accel/accel.sh@21 -- # val= 00:11:21.811 12:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # IFS=: 00:11:21.811 12:56:40 -- accel/accel.sh@20 -- # read -r var val 00:11:23.716 12:56:42 -- accel/accel.sh@21 -- # val= 00:11:23.716 12:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # IFS=: 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # read -r var val 00:11:23.716 12:56:42 -- accel/accel.sh@21 -- # val= 00:11:23.716 12:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # IFS=: 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # read -r var val 00:11:23.716 12:56:42 -- accel/accel.sh@21 -- # val= 00:11:23.716 12:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # IFS=: 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # read -r var val 00:11:23.716 12:56:42 -- accel/accel.sh@21 -- # val= 00:11:23.716 12:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # IFS=: 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # read -r var val 00:11:23.716 12:56:42 -- accel/accel.sh@21 -- # val= 00:11:23.716 12:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # IFS=: 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # read -r var val 00:11:23.716 12:56:42 -- accel/accel.sh@21 -- # val= 00:11:23.716 12:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # IFS=: 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # read -r var val 00:11:23.716 12:56:42 -- accel/accel.sh@21 -- # val= 00:11:23.716 12:56:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # IFS=: 00:11:23.716 12:56:42 -- accel/accel.sh@20 -- # read -r var val 00:11:23.716 12:56:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:23.716 12:56:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:23.716 12:56:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:23.716 ************************************ 00:11:23.716 END TEST accel_decomp_mthread 00:11:23.716 ************************************ 00:11:23.716 00:11:23.716 real 0m4.745s 00:11:23.716 user 0m4.258s 00:11:23.716 sys 0m0.347s 00:11:23.716 12:56:42 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:23.716 12:56:42 -- common/autotest_common.sh@10 -- # set +x 00:11:23.716 12:56:42 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:23.716 12:56:42 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:23.716 12:56:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:23.716 12:56:42 -- common/autotest_common.sh@10 -- # set +x 00:11:23.716 ************************************ 00:11:23.716 START TEST accel_deomp_full_mthread 00:11:23.716 ************************************ 00:11:23.716 12:56:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:23.716 12:56:42 -- accel/accel.sh@16 -- # local accel_opc 00:11:23.716 12:56:42 -- accel/accel.sh@17 -- # local accel_module 00:11:23.716 12:56:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:23.716 12:56:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:23.717 12:56:42 -- accel/accel.sh@12 -- # build_accel_config 00:11:23.717 12:56:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:23.717 12:56:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:23.717 12:56:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:23.717 12:56:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:23.717 12:56:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:23.717 12:56:42 -- accel/accel.sh@41 -- # local IFS=, 00:11:23.717 12:56:42 -- accel/accel.sh@42 -- # jq -r . 00:11:23.717 [2024-06-11 12:56:42.523377] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:23.717 [2024-06-11 12:56:42.523595] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110423 ] 00:11:23.976 [2024-06-11 12:56:42.691510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.235 [2024-06-11 12:56:42.892646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.137 12:56:44 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:26.137 00:11:26.137 SPDK Configuration: 00:11:26.137 Core mask: 0x1 00:11:26.137 00:11:26.137 Accel Perf Configuration: 00:11:26.137 Workload Type: decompress 00:11:26.137 Transfer size: 111250 bytes 00:11:26.137 Vector count 1 00:11:26.137 Module: software 00:11:26.137 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:26.137 Queue depth: 32 00:11:26.137 Allocate depth: 32 00:11:26.137 # threads/core: 2 00:11:26.137 Run time: 1 seconds 00:11:26.137 Verify: Yes 00:11:26.137 00:11:26.137 Running for 1 seconds... 
00:11:26.137 00:11:26.137 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:26.137 ------------------------------------------------------------------------------------ 00:11:26.137 0,1 2624/s 108 MiB/s 0 0 00:11:26.137 0,0 2592/s 107 MiB/s 0 0 00:11:26.137 ==================================================================================== 00:11:26.137 Total 5216/s 553 MiB/s 0 0' 00:11:26.137 12:56:44 -- accel/accel.sh@20 -- # IFS=: 00:11:26.137 12:56:44 -- accel/accel.sh@20 -- # read -r var val 00:11:26.137 12:56:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:26.137 12:56:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:26.137 12:56:44 -- accel/accel.sh@12 -- # build_accel_config 00:11:26.137 12:56:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:26.137 12:56:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:26.137 12:56:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:26.137 12:56:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:26.137 12:56:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:26.137 12:56:44 -- accel/accel.sh@41 -- # local IFS=, 00:11:26.137 12:56:44 -- accel/accel.sh@42 -- # jq -r . 00:11:26.137 [2024-06-11 12:56:44.933330] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:26.137 [2024-06-11 12:56:44.933564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110464 ] 00:11:26.396 [2024-06-11 12:56:45.100381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.655 [2024-06-11 12:56:45.300656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val= 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val= 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val= 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val=0x1 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val= 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val= 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val=decompress 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val= 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val=software 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@23 -- # accel_module=software 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val=32 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val=32 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.914 12:56:45 -- accel/accel.sh@21 -- # val=2 00:11:26.914 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.914 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.915 12:56:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:26.915 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.915 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.915 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.915 12:56:45 -- accel/accel.sh@21 -- # val=Yes 00:11:26.915 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.915 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.915 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.915 12:56:45 -- accel/accel.sh@21 -- # val= 00:11:26.915 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.915 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.915 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:26.915 12:56:45 -- accel/accel.sh@21 -- # val= 00:11:26.915 12:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.915 12:56:45 -- accel/accel.sh@20 -- # IFS=: 00:11:26.915 12:56:45 -- accel/accel.sh@20 -- # read -r var val 00:11:28.818 12:56:47 -- accel/accel.sh@21 -- # val= 00:11:28.818 12:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # IFS=: 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # read -r var val 00:11:28.818 12:56:47 -- accel/accel.sh@21 -- # val= 00:11:28.818 12:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # IFS=: 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # read -r var val 00:11:28.818 12:56:47 -- accel/accel.sh@21 -- # val= 00:11:28.818 12:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # IFS=: 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # 
read -r var val 00:11:28.818 12:56:47 -- accel/accel.sh@21 -- # val= 00:11:28.818 12:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # IFS=: 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # read -r var val 00:11:28.818 12:56:47 -- accel/accel.sh@21 -- # val= 00:11:28.818 12:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # IFS=: 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # read -r var val 00:11:28.818 12:56:47 -- accel/accel.sh@21 -- # val= 00:11:28.818 12:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # IFS=: 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # read -r var val 00:11:28.818 12:56:47 -- accel/accel.sh@21 -- # val= 00:11:28.818 12:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # IFS=: 00:11:28.818 12:56:47 -- accel/accel.sh@20 -- # read -r var val 00:11:28.818 12:56:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:28.818 12:56:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:28.818 12:56:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:28.818 ************************************ 00:11:28.818 END TEST accel_deomp_full_mthread 00:11:28.818 ************************************ 00:11:28.818 00:11:28.818 real 0m4.819s 00:11:28.818 user 0m4.306s 00:11:28.818 sys 0m0.357s 00:11:28.818 12:56:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.818 12:56:47 -- common/autotest_common.sh@10 -- # set +x 00:11:28.818 12:56:47 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:28.818 12:56:47 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:28.818 12:56:47 -- accel/accel.sh@129 -- # build_accel_config 00:11:28.818 12:56:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:28.818 12:56:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:28.818 12:56:47 -- common/autotest_common.sh@10 -- # set +x 00:11:28.818 12:56:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:28.818 12:56:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:28.818 12:56:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:28.818 12:56:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:28.818 12:56:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:28.818 12:56:47 -- accel/accel.sh@41 -- # local IFS=, 00:11:28.818 12:56:47 -- accel/accel.sh@42 -- # jq -r . 00:11:28.818 ************************************ 00:11:28.818 START TEST accel_dif_functional_tests 00:11:28.818 ************************************ 00:11:28.818 12:56:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:28.818 [2024-06-11 12:56:47.421215] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:28.818 [2024-06-11 12:56:47.421403] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110511 ] 00:11:28.818 [2024-06-11 12:56:47.596675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:29.077 [2024-06-11 12:56:47.774195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.077 [2024-06-11 12:56:47.774353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.077 [2024-06-11 12:56:47.774348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.336 00:11:29.336 00:11:29.336 CUnit - A unit testing framework for C - Version 2.1-3 00:11:29.336 http://cunit.sourceforge.net/ 00:11:29.336 00:11:29.336 00:11:29.336 Suite: accel_dif 00:11:29.336 Test: verify: DIF generated, GUARD check ...passed 00:11:29.336 Test: verify: DIF generated, APPTAG check ...passed 00:11:29.336 Test: verify: DIF generated, REFTAG check ...passed 00:11:29.336 Test: verify: DIF not generated, GUARD check ...[2024-06-11 12:56:48.055668] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:29.336 [2024-06-11 12:56:48.055810] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:29.336 passed 00:11:29.336 Test: verify: DIF not generated, APPTAG check ...[2024-06-11 12:56:48.055903] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:29.336 [2024-06-11 12:56:48.055955] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:29.336 passed 00:11:29.336 Test: verify: DIF not generated, REFTAG check ...[2024-06-11 12:56:48.056003] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:29.336 [2024-06-11 12:56:48.056066] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:29.336 passed 00:11:29.336 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:29.336 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-11 12:56:48.056206] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:29.336 passed 00:11:29.336 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:29.336 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:29.336 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:29.336 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-11 12:56:48.056445] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:29.336 passed 00:11:29.336 Test: generate copy: DIF generated, GUARD check ...passed 00:11:29.336 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:29.336 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:29.336 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:29.336 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:29.336 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:29.336 Test: generate copy: iovecs-len validate ...[2024-06-11 12:56:48.056880] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:29.336 passed 00:11:29.336 Test: generate copy: buffer alignment validate ...passed 00:11:29.336 00:11:29.336 Run Summary: Type Total Ran Passed Failed Inactive 00:11:29.336 suites 1 1 n/a 0 0 00:11:29.336 tests 20 20 20 0 0 00:11:29.336 asserts 204 204 204 0 n/a 00:11:29.336 00:11:29.336 Elapsed time = 0.001 seconds 00:11:30.287 00:11:30.287 real 0m1.735s 00:11:30.287 user 0m3.320s 00:11:30.287 sys 0m0.258s 00:11:30.287 ************************************ 00:11:30.287 END TEST accel_dif_functional_tests 00:11:30.287 ************************************ 00:11:30.287 12:56:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.287 12:56:49 -- common/autotest_common.sh@10 -- # set +x 00:11:30.287 ************************************ 00:11:30.287 END TEST accel 00:11:30.287 ************************************ 00:11:30.287 00:11:30.287 real 1m44.343s 00:11:30.287 user 1m55.282s 00:11:30.287 sys 0m8.780s 00:11:30.287 12:56:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.287 12:56:49 -- common/autotest_common.sh@10 -- # set +x 00:11:30.546 12:56:49 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:30.546 12:56:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:30.546 12:56:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:30.546 12:56:49 -- common/autotest_common.sh@10 -- # set +x 00:11:30.546 ************************************ 00:11:30.546 START TEST accel_rpc 00:11:30.546 ************************************ 00:11:30.546 12:56:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:30.546 * Looking for test storage... 00:11:30.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:30.546 12:56:49 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:30.546 12:56:49 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=110601 00:11:30.546 12:56:49 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:30.546 12:56:49 -- accel/accel_rpc.sh@15 -- # waitforlisten 110601 00:11:30.546 12:56:49 -- common/autotest_common.sh@819 -- # '[' -z 110601 ']' 00:11:30.546 12:56:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.546 12:56:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:30.546 12:56:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.546 12:56:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:30.546 12:56:49 -- common/autotest_common.sh@10 -- # set +x 00:11:30.546 [2024-06-11 12:56:49.334153] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
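The accel_dif_functional_tests run summarized above drives the standalone dif example binary with an accel JSON config handed over a process-substitution descriptor (-c /dev/fd/62). A minimal hand-run sketch of that invocation follows; the empty accel config shape and the SPDK_DIR variable are assumptions, not taken from this run.
  # Sketch, assuming SPDK_DIR points at a built SPDK tree and that an empty accel
  # config section is acceptable to the dif app (the harness builds its own config).
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  accel_cfg='{"subsystems": [{"subsystem": "accel", "config": []}]}'
  "$SPDK_DIR/test/accel/dif/dif" -c <(printf '%s\n' "$accel_cfg")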
00:11:30.546 [2024-06-11 12:56:49.334469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110601 ] 00:11:30.804 [2024-06-11 12:56:49.518847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.063 [2024-06-11 12:56:49.736932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:31.063 [2024-06-11 12:56:49.737183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.630 12:56:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:31.630 12:56:50 -- common/autotest_common.sh@852 -- # return 0 00:11:31.630 12:56:50 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:31.630 12:56:50 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:31.630 12:56:50 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:31.630 12:56:50 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:31.630 12:56:50 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:31.631 12:56:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:31.631 12:56:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:31.631 12:56:50 -- common/autotest_common.sh@10 -- # set +x 00:11:31.631 ************************************ 00:11:31.631 START TEST accel_assign_opcode 00:11:31.631 ************************************ 00:11:31.631 12:56:50 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:11:31.631 12:56:50 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:31.631 12:56:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.631 12:56:50 -- common/autotest_common.sh@10 -- # set +x 00:11:31.631 [2024-06-11 12:56:50.214027] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:31.631 12:56:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.631 12:56:50 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:31.631 12:56:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.631 12:56:50 -- common/autotest_common.sh@10 -- # set +x 00:11:31.631 [2024-06-11 12:56:50.222009] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:31.631 12:56:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:31.631 12:56:50 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:31.631 12:56:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.631 12:56:50 -- common/autotest_common.sh@10 -- # set +x 00:11:32.198 12:56:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.198 12:56:50 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:32.198 12:56:50 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:32.198 12:56:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:32.198 12:56:50 -- common/autotest_common.sh@10 -- # set +x 00:11:32.198 12:56:50 -- accel/accel_rpc.sh@42 -- # grep software 00:11:32.198 12:56:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.198 software 00:11:32.198 ************************************ 00:11:32.198 END TEST accel_assign_opcode 00:11:32.198 ************************************ 00:11:32.198 00:11:32.198 real 0m0.755s 00:11:32.198 user 0m0.057s 00:11:32.198 sys 0m0.012s 00:11:32.198 12:56:50 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.198 12:56:50 -- common/autotest_common.sh@10 -- # set +x 00:11:32.198 12:56:50 -- accel/accel_rpc.sh@55 -- # killprocess 110601 00:11:32.198 12:56:50 -- common/autotest_common.sh@926 -- # '[' -z 110601 ']' 00:11:32.198 12:56:50 -- common/autotest_common.sh@930 -- # kill -0 110601 00:11:32.198 12:56:50 -- common/autotest_common.sh@931 -- # uname 00:11:32.198 12:56:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:32.198 12:56:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110601 00:11:32.198 killing process with pid 110601 00:11:32.198 12:56:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:32.198 12:56:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:32.198 12:56:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110601' 00:11:32.198 12:56:51 -- common/autotest_common.sh@945 -- # kill 110601 00:11:32.198 12:56:51 -- common/autotest_common.sh@950 -- # wait 110601 00:11:34.726 00:11:34.726 real 0m3.768s 00:11:34.726 user 0m3.735s 00:11:34.726 sys 0m0.505s 00:11:34.726 ************************************ 00:11:34.726 END TEST accel_rpc 00:11:34.726 ************************************ 00:11:34.726 12:56:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.726 12:56:52 -- common/autotest_common.sh@10 -- # set +x 00:11:34.726 12:56:52 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:34.726 12:56:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:34.726 12:56:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:34.726 12:56:52 -- common/autotest_common.sh@10 -- # set +x 00:11:34.726 ************************************ 00:11:34.726 START TEST app_cmdline 00:11:34.726 ************************************ 00:11:34.726 12:56:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:34.726 * Looking for test storage... 00:11:34.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:34.726 12:56:53 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:34.726 12:56:53 -- app/cmdline.sh@17 -- # spdk_tgt_pid=110752 00:11:34.726 12:56:53 -- app/cmdline.sh@18 -- # waitforlisten 110752 00:11:34.726 12:56:53 -- common/autotest_common.sh@819 -- # '[' -z 110752 ']' 00:11:34.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.726 12:56:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.726 12:56:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:34.726 12:56:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.726 12:56:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:34.726 12:56:53 -- common/autotest_common.sh@10 -- # set +x 00:11:34.726 12:56:53 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:34.726 [2024-06-11 12:56:53.129033] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
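The accel_rpc/accel_assign_opcode flow traced above comes down to three RPCs against a spdk_tgt started with --wait-for-rpc: assign the copy opcode to the software module while the framework is still uninitialized, initialize the framework, then read the assignment back. A hand-run sketch, assuming the default RPC socket and rpc.py from this repo:
  # Sketch: reproduce the opcode-assignment check against a waiting spdk_tgt.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o copy -m software     # issued before framework init, as in the trace
  $rpc framework_start_init
  $rpc accel_get_opc_assignments | jq -r .copy  # expected output: software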
00:11:34.726 [2024-06-11 12:56:53.129509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110752 ] 00:11:34.726 [2024-06-11 12:56:53.295465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.726 [2024-06-11 12:56:53.479878] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:34.726 [2024-06-11 12:56:53.480120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.100 12:56:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:36.100 12:56:54 -- common/autotest_common.sh@852 -- # return 0 00:11:36.100 12:56:54 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:36.359 { 00:11:36.359 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:11:36.359 "fields": { 00:11:36.359 "major": 24, 00:11:36.359 "minor": 1, 00:11:36.359 "patch": 1, 00:11:36.359 "suffix": "-pre", 00:11:36.359 "commit": "130b9406a" 00:11:36.359 } 00:11:36.359 } 00:11:36.359 12:56:55 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:36.359 12:56:55 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:36.359 12:56:55 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:36.359 12:56:55 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:36.359 12:56:55 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:36.359 12:56:55 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:36.359 12:56:55 -- app/cmdline.sh@26 -- # sort 00:11:36.359 12:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.359 12:56:55 -- common/autotest_common.sh@10 -- # set +x 00:11:36.359 12:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:36.359 12:56:55 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:36.359 12:56:55 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:36.359 12:56:55 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:36.359 12:56:55 -- common/autotest_common.sh@640 -- # local es=0 00:11:36.359 12:56:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:36.359 12:56:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.359 12:56:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:36.359 12:56:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.359 12:56:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:36.359 12:56:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.359 12:56:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:36.359 12:56:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.359 12:56:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:36.359 12:56:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:36.618 request: 00:11:36.618 { 00:11:36.618 "method": "env_dpdk_get_mem_stats", 00:11:36.618 "req_id": 1 00:11:36.618 } 00:11:36.618 Got 
JSON-RPC error response 00:11:36.618 response: 00:11:36.618 { 00:11:36.618 "code": -32601, 00:11:36.618 "message": "Method not found" 00:11:36.618 } 00:11:36.618 12:56:55 -- common/autotest_common.sh@643 -- # es=1 00:11:36.618 12:56:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:36.618 12:56:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:36.618 12:56:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:36.618 12:56:55 -- app/cmdline.sh@1 -- # killprocess 110752 00:11:36.618 12:56:55 -- common/autotest_common.sh@926 -- # '[' -z 110752 ']' 00:11:36.618 12:56:55 -- common/autotest_common.sh@930 -- # kill -0 110752 00:11:36.618 12:56:55 -- common/autotest_common.sh@931 -- # uname 00:11:36.618 12:56:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:36.618 12:56:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110752 00:11:36.618 12:56:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:36.618 12:56:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:36.618 12:56:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110752' 00:11:36.618 killing process with pid 110752 00:11:36.618 12:56:55 -- common/autotest_common.sh@945 -- # kill 110752 00:11:36.618 12:56:55 -- common/autotest_common.sh@950 -- # wait 110752 00:11:38.523 00:11:38.523 real 0m4.166s 00:11:38.523 user 0m4.759s 00:11:38.523 sys 0m0.548s 00:11:38.523 12:56:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.523 12:56:57 -- common/autotest_common.sh@10 -- # set +x 00:11:38.523 ************************************ 00:11:38.523 END TEST app_cmdline 00:11:38.523 ************************************ 00:11:38.523 12:56:57 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:38.523 12:56:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:38.523 12:56:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:38.523 12:56:57 -- common/autotest_common.sh@10 -- # set +x 00:11:38.523 ************************************ 00:11:38.523 START TEST version 00:11:38.523 ************************************ 00:11:38.523 12:56:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:38.523 * Looking for test storage... 
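The app_cmdline test above exercises the RPC allowlist: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so spdk_get_version answers with the version object shown while env_dpdk_get_mem_stats is refused with JSON-RPC error -32601 (Method not found). Roughly, by hand (backgrounding with & is a simplification of the harness):
  # Sketch: only allowlisted methods are served by this target.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
      --rpcs-allowed spdk_get_version,rpc_get_methods &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # rejected: -32601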
00:11:38.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:38.523 12:56:57 -- app/version.sh@17 -- # get_header_version major 00:11:38.523 12:56:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:38.523 12:56:57 -- app/version.sh@14 -- # cut -f2 00:11:38.523 12:56:57 -- app/version.sh@14 -- # tr -d '"' 00:11:38.523 12:56:57 -- app/version.sh@17 -- # major=24 00:11:38.523 12:56:57 -- app/version.sh@18 -- # get_header_version minor 00:11:38.523 12:56:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:38.523 12:56:57 -- app/version.sh@14 -- # cut -f2 00:11:38.523 12:56:57 -- app/version.sh@14 -- # tr -d '"' 00:11:38.523 12:56:57 -- app/version.sh@18 -- # minor=1 00:11:38.523 12:56:57 -- app/version.sh@19 -- # get_header_version patch 00:11:38.523 12:56:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:38.523 12:56:57 -- app/version.sh@14 -- # cut -f2 00:11:38.523 12:56:57 -- app/version.sh@14 -- # tr -d '"' 00:11:38.523 12:56:57 -- app/version.sh@19 -- # patch=1 00:11:38.523 12:56:57 -- app/version.sh@20 -- # get_header_version suffix 00:11:38.523 12:56:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:38.523 12:56:57 -- app/version.sh@14 -- # cut -f2 00:11:38.523 12:56:57 -- app/version.sh@14 -- # tr -d '"' 00:11:38.523 12:56:57 -- app/version.sh@20 -- # suffix=-pre 00:11:38.523 12:56:57 -- app/version.sh@22 -- # version=24.1 00:11:38.523 12:56:57 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:38.523 12:56:57 -- app/version.sh@25 -- # version=24.1.1 00:11:38.523 12:56:57 -- app/version.sh@28 -- # version=24.1.1rc0 00:11:38.523 12:56:57 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:38.523 12:56:57 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:38.523 12:56:57 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:11:38.523 12:56:57 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:11:38.523 00:11:38.523 real 0m0.148s 00:11:38.523 user 0m0.116s 00:11:38.523 sys 0m0.065s 00:11:38.523 12:56:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.523 ************************************ 00:11:38.523 END TEST version 00:11:38.523 ************************************ 00:11:38.523 12:56:57 -- common/autotest_common.sh@10 -- # set +x 00:11:38.781 12:56:57 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:11:38.781 12:56:57 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:38.781 12:56:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:38.782 12:56:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:38.782 12:56:57 -- common/autotest_common.sh@10 -- # set +x 00:11:38.782 ************************************ 00:11:38.782 START TEST blockdev_general 00:11:38.782 ************************************ 00:11:38.782 12:56:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:38.782 * Looking for test storage... 
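The version test above derives the version string purely from the public header with grep/cut/tr and then cross-checks the installed Python package. A condensed sketch of the same extraction (header path as in this workspace; the field helper name is illustrative):
  # Sketch: pull the version fields the same way get_header_version does above.
  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  field() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
  echo "$(field MAJOR).$(field MINOR).$(field PATCH)$(field SUFFIX)"   # 24.1.1-pre here
  # the script maps the -pre suffix to rc0 and compares 24.1.1rc0 with
  # python3 -c 'import spdk; print(spdk.__version__)'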
00:11:38.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:38.782 12:56:57 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:38.782 12:56:57 -- bdev/nbd_common.sh@6 -- # set -e 00:11:38.782 12:56:57 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:38.782 12:56:57 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:38.782 12:56:57 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:38.782 12:56:57 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:38.782 12:56:57 -- bdev/blockdev.sh@18 -- # : 00:11:38.782 12:56:57 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:38.782 12:56:57 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:38.782 12:56:57 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:38.782 12:56:57 -- bdev/blockdev.sh@672 -- # uname -s 00:11:38.782 12:56:57 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:38.782 12:56:57 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:38.782 12:56:57 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:11:38.782 12:56:57 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:38.782 12:56:57 -- bdev/blockdev.sh@682 -- # dek= 00:11:38.782 12:56:57 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:38.782 12:56:57 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:38.782 12:56:57 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:38.782 12:56:57 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:11:38.782 12:56:57 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:11:38.782 12:56:57 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:38.782 12:56:57 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=110931 00:11:38.782 12:56:57 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:38.782 12:56:57 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:38.782 12:56:57 -- bdev/blockdev.sh@47 -- # waitforlisten 110931 00:11:38.782 12:56:57 -- common/autotest_common.sh@819 -- # '[' -z 110931 ']' 00:11:38.782 12:56:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.782 12:56:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:38.782 12:56:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.782 12:56:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:38.782 12:56:57 -- common/autotest_common.sh@10 -- # set +x 00:11:38.782 [2024-06-11 12:56:57.544451] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:38.782 [2024-06-11 12:56:57.544649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110931 ] 00:11:39.040 [2024-06-11 12:56:57.709407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.299 [2024-06-11 12:56:57.889525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:39.299 [2024-06-11 12:56:57.889823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.865 12:56:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:39.865 12:56:58 -- common/autotest_common.sh@852 -- # return 0 00:11:39.865 12:56:58 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:39.865 12:56:58 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:11:39.865 12:56:58 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:11:39.865 12:56:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.865 12:56:58 -- common/autotest_common.sh@10 -- # set +x 00:11:40.432 [2024-06-11 12:56:59.135553] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:40.432 [2024-06-11 12:56:59.135665] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:40.432 00:11:40.432 [2024-06-11 12:56:59.143528] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:40.432 [2024-06-11 12:56:59.143612] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:40.432 00:11:40.432 Malloc0 00:11:40.432 Malloc1 00:11:40.432 Malloc2 00:11:40.691 Malloc3 00:11:40.691 Malloc4 00:11:40.691 Malloc5 00:11:40.691 Malloc6 00:11:40.691 Malloc7 00:11:40.691 Malloc8 00:11:40.691 Malloc9 00:11:40.691 [2024-06-11 12:56:59.515285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:40.691 [2024-06-11 12:56:59.515398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.691 [2024-06-11 12:56:59.515430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:40.691 [2024-06-11 12:56:59.515456] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.691 [2024-06-11 12:56:59.517816] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.691 [2024-06-11 12:56:59.517895] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:40.691 TestPT 00:11:40.951 12:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:40.951 12:56:59 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:40.951 5000+0 records in 00:11:40.951 5000+0 records out 00:11:40.951 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0265139 s, 386 MB/s 00:11:40.951 12:56:59 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:40.951 12:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:40.951 12:56:59 -- common/autotest_common.sh@10 -- # set +x 00:11:40.951 AIO0 00:11:40.951 12:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:40.951 12:56:59 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:40.951 12:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:40.951 12:56:59 -- common/autotest_common.sh@10 -- # set +x 
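setup_bdev_conf, traced above, builds the device set the rest of blockdev.sh runs against: Malloc0-Malloc9, split vbdevs on Malloc1/Malloc2, a passthru TestPT on Malloc3, raid0/concat0/raid1 volumes, and an AIO bdev backed by a ~10 MB file (all visible in the bdev_get_bdevs dump that follows). The AIO piece in isolation (paths as in this workspace):
  # Sketch: create the AIO backing file, register it as bdev AIO0, let examine finish.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
  $rpc bdev_wait_for_examine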
00:11:40.951 12:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:40.951 12:56:59 -- bdev/blockdev.sh@738 -- # cat 00:11:40.951 12:56:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:40.951 12:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:40.951 12:56:59 -- common/autotest_common.sh@10 -- # set +x 00:11:40.951 12:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:40.951 12:56:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:40.951 12:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:40.951 12:56:59 -- common/autotest_common.sh@10 -- # set +x 00:11:40.951 12:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:40.951 12:56:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:40.951 12:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:40.951 12:56:59 -- common/autotest_common.sh@10 -- # set +x 00:11:40.951 12:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:40.951 12:56:59 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:40.951 12:56:59 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:40.951 12:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:40.951 12:56:59 -- common/autotest_common.sh@10 -- # set +x 00:11:40.951 12:56:59 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:40.951 12:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:41.214 12:56:59 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:41.214 12:56:59 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:41.215 12:56:59 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d4ef7981-04aa-41c5-bd36-25402e0b5a8d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4ef7981-04aa-41c5-bd36-25402e0b5a8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "24384e09-7911-5926-af6f-39dca7e09713"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "24384e09-7911-5926-af6f-39dca7e09713",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "a2298633-1b06-5b53-ad6d-96656ffb240a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a2298633-1b06-5b53-ad6d-96656ffb240a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "f45f29cb-f771-5007-96d8-e63df244523e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f45f29cb-f771-5007-96d8-e63df244523e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "f1f516ca-a696-5999-b142-9ffaf8603d50"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1f516ca-a696-5999-b142-9ffaf8603d50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "7a1b29fd-85e4-5cf1-a63e-2d234fd3b865"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a1b29fd-85e4-5cf1-a63e-2d234fd3b865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "bd347b62-69c4-5c39-9dab-1e3ed610c360"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bd347b62-69c4-5c39-9dab-1e3ed610c360",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "a4bacf07-d7e2-5b5e-8f4b-30239154d39b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a4bacf07-d7e2-5b5e-8f4b-30239154d39b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "269bf124-0662-58c2-ad4b-837c5a58d1d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "269bf124-0662-58c2-ad4b-837c5a58d1d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "f6cc25d1-e36d-5973-9c5d-59f6f9c78d52"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f6cc25d1-e36d-5973-9c5d-59f6f9c78d52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "351e7602-2c9e-5b05-a3c7-9d2572040471"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "351e7602-2c9e-5b05-a3c7-9d2572040471",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "fcec6c0a-b0b2-5ddc-99dc-46d180b015cf"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fcec6c0a-b0b2-5ddc-99dc-46d180b015cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "233cbe36-1fdf-41b4-b25f-03f3bad1ff65"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "233cbe36-1fdf-41b4-b25f-03f3bad1ff65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "233cbe36-1fdf-41b4-b25f-03f3bad1ff65",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "e3e1758c-9ada-46ad-9e4d-0611b2e2d013",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "459a27ec-368a-4d8e-9698-564da0d70b95",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "8f037dc4-ab21-4627-90b7-1d1accc5321a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8f037dc4-ab21-4627-90b7-1d1accc5321a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8f037dc4-ab21-4627-90b7-1d1accc5321a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "82ba9584-5ada-4266-931a-bd880a126de9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "5aac7698-e522-4c45-9534-f30ba1a6ad41",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ca32978e-92ad-417b-8230-fe6489fbd354"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ca32978e-92ad-417b-8230-fe6489fbd354",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ca32978e-92ad-417b-8230-fe6489fbd354",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2634d618-2893-4886-97ef-c76b9b7268a7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "e7fd4883-0e23-4714-b440-5b20a91fa79f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ae133669-3fcf-4b1c-9ffc-62b8710617e2"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ae133669-3fcf-4b1c-9ffc-62b8710617e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:41.215 12:56:59 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:41.215 12:56:59 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:11:41.215 12:56:59 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:41.215 12:56:59 -- bdev/blockdev.sh@752 -- # killprocess 110931 00:11:41.215 12:56:59 -- common/autotest_common.sh@926 -- # '[' -z 110931 ']' 00:11:41.215 12:56:59 -- common/autotest_common.sh@930 -- # kill -0 110931 00:11:41.215 12:56:59 -- common/autotest_common.sh@931 -- # uname 00:11:41.215 12:56:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:41.215 12:56:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110931 00:11:41.215 12:56:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:41.215 12:56:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:41.215 12:56:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110931' 00:11:41.215 killing process with pid 110931 00:11:41.215 12:56:59 -- common/autotest_common.sh@945 -- # kill 110931 00:11:41.215 12:56:59 -- common/autotest_common.sh@950 -- # wait 110931 00:11:43.745 12:57:02 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:43.745 12:57:02 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:43.745 12:57:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
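The long JSON block above is the raw bdev_get_bdevs dump; the script immediately filters it to the unclaimed bdevs and keeps only their names, which is where bdev_list and hello_world_bdev=Malloc0 come from. Equivalent one-liner (default RPC socket assumed):
  # Sketch: reduce the dump above to the list of unclaimed bdev names.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.claimed == false) | .name'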
00:11:43.745 12:57:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:43.745 12:57:02 -- common/autotest_common.sh@10 -- # set +x 00:11:43.745 ************************************ 00:11:43.745 START TEST bdev_hello_world 00:11:43.745 ************************************ 00:11:43.745 12:57:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:43.745 [2024-06-11 12:57:02.577493] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:43.745 [2024-06-11 12:57:02.577659] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111026 ] 00:11:44.003 [2024-06-11 12:57:02.745003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.261 [2024-06-11 12:57:02.925554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.519 [2024-06-11 12:57:03.266093] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:44.519 [2024-06-11 12:57:03.266216] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:44.519 [2024-06-11 12:57:03.274068] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:44.519 [2024-06-11 12:57:03.274166] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:44.519 [2024-06-11 12:57:03.282087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.519 [2024-06-11 12:57:03.282145] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:44.519 [2024-06-11 12:57:03.282194] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:44.778 [2024-06-11 12:57:03.458470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.778 [2024-06-11 12:57:03.458613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.778 [2024-06-11 12:57:03.458659] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:44.778 [2024-06-11 12:57:03.458685] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.778 [2024-06-11 12:57:03.460920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.778 [2024-06-11 12:57:03.460987] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:45.036 [2024-06-11 12:57:03.750821] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:45.036 [2024-06-11 12:57:03.750933] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:45.036 [2024-06-11 12:57:03.751049] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:45.036 [2024-06-11 12:57:03.751133] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:45.036 [2024-06-11 12:57:03.751264] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:45.036 [2024-06-11 12:57:03.751307] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:45.036 [2024-06-11 12:57:03.751385] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:11:45.036 00:11:45.036 [2024-06-11 12:57:03.751439] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:46.939 ************************************ 00:11:46.939 END TEST bdev_hello_world 00:11:46.939 ************************************ 00:11:46.939 00:11:46.939 real 0m2.965s 00:11:46.939 user 0m2.450s 00:11:46.939 sys 0m0.364s 00:11:46.939 12:57:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.939 12:57:05 -- common/autotest_common.sh@10 -- # set +x 00:11:46.939 12:57:05 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:46.939 12:57:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:46.939 12:57:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:46.939 12:57:05 -- common/autotest_common.sh@10 -- # set +x 00:11:46.939 ************************************ 00:11:46.939 START TEST bdev_bounds 00:11:46.939 ************************************ 00:11:46.939 12:57:05 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:11:46.939 12:57:05 -- bdev/blockdev.sh@288 -- # bdevio_pid=111095 00:11:46.939 Process bdevio pid: 111095 00:11:46.939 12:57:05 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:46.939 12:57:05 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:46.939 12:57:05 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 111095' 00:11:46.939 12:57:05 -- bdev/blockdev.sh@291 -- # waitforlisten 111095 00:11:46.939 12:57:05 -- common/autotest_common.sh@819 -- # '[' -z 111095 ']' 00:11:46.939 12:57:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.939 12:57:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:46.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.939 12:57:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.939 12:57:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:46.939 12:57:05 -- common/autotest_common.sh@10 -- # set +x 00:11:46.939 [2024-06-11 12:57:05.588681] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
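The bdev_hello_world pass that just completed is the stock hello_bdev example pointed at Malloc0 from the generated bdev.json: it opens the bdev, takes an I/O channel, writes a buffer, reads it back ("Hello World!") and stops the app. Standalone invocation (paths as in this workspace):
  # Sketch: run the hello_bdev example against Malloc0 from the test's bdev.json.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0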
00:11:46.939 [2024-06-11 12:57:05.588896] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111095 ] 00:11:46.939 [2024-06-11 12:57:05.764178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:47.197 [2024-06-11 12:57:05.948244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.197 [2024-06-11 12:57:05.948372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.197 [2024-06-11 12:57:05.948370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.767 [2024-06-11 12:57:06.301152] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:47.767 [2024-06-11 12:57:06.301512] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:47.767 [2024-06-11 12:57:06.309122] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:47.767 [2024-06-11 12:57:06.309317] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:47.767 [2024-06-11 12:57:06.317175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:47.767 [2024-06-11 12:57:06.317368] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:47.767 [2024-06-11 12:57:06.317520] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:47.767 [2024-06-11 12:57:06.507372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:47.767 [2024-06-11 12:57:06.507757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.767 [2024-06-11 12:57:06.507911] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:47.767 [2024-06-11 12:57:06.508023] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.767 [2024-06-11 12:57:06.510631] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.767 [2024-06-11 12:57:06.510838] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:48.705 12:57:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:48.705 12:57:07 -- common/autotest_common.sh@852 -- # return 0 00:11:48.705 12:57:07 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:48.705 I/O targets: 00:11:48.705 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:48.705 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:48.705 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:48.705 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:48.705 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:48.705 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:48.705 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:48.705 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:48.705 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:48.705 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:48.705 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:48.705 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:48.705 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:48.705 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:48.705 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:48.705 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
00:11:48.705 00:11:48.705 00:11:48.705 CUnit - A unit testing framework for C - Version 2.1-3 00:11:48.705 http://cunit.sourceforge.net/ 00:11:48.705 00:11:48.705 00:11:48.705 Suite: bdevio tests on: AIO0 00:11:48.705 Test: blockdev write read block ...passed 00:11:48.705 Test: blockdev write zeroes read block ...passed 00:11:48.705 Test: blockdev write zeroes read no split ...passed 00:11:48.705 Test: blockdev write zeroes read split ...passed 00:11:48.705 Test: blockdev write zeroes read split partial ...passed 00:11:48.705 Test: blockdev reset ...passed 00:11:48.705 Test: blockdev write read 8 blocks ...passed 00:11:48.705 Test: blockdev write read size > 128k ...passed 00:11:48.705 Test: blockdev write read invalid size ...passed 00:11:48.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.705 Test: blockdev write read max offset ...passed 00:11:48.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.705 Test: blockdev writev readv 8 blocks ...passed 00:11:48.705 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.705 Test: blockdev writev readv block ...passed 00:11:48.705 Test: blockdev writev readv size > 128k ...passed 00:11:48.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.705 Test: blockdev comparev and writev ...passed 00:11:48.705 Test: blockdev nvme passthru rw ...passed 00:11:48.705 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.705 Test: blockdev nvme admin passthru ...passed 00:11:48.705 Test: blockdev copy ...passed 00:11:48.705 Suite: bdevio tests on: raid1 00:11:48.705 Test: blockdev write read block ...passed 00:11:48.705 Test: blockdev write zeroes read block ...passed 00:11:48.705 Test: blockdev write zeroes read no split ...passed 00:11:48.705 Test: blockdev write zeroes read split ...passed 00:11:48.705 Test: blockdev write zeroes read split partial ...passed 00:11:48.705 Test: blockdev reset ...passed 00:11:48.705 Test: blockdev write read 8 blocks ...passed 00:11:48.705 Test: blockdev write read size > 128k ...passed 00:11:48.705 Test: blockdev write read invalid size ...passed 00:11:48.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.705 Test: blockdev write read max offset ...passed 00:11:48.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.705 Test: blockdev writev readv 8 blocks ...passed 00:11:48.705 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.705 Test: blockdev writev readv block ...passed 00:11:48.705 Test: blockdev writev readv size > 128k ...passed 00:11:48.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.706 Test: blockdev comparev and writev ...passed 00:11:48.706 Test: blockdev nvme passthru rw ...passed 00:11:48.706 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.706 Test: blockdev nvme admin passthru ...passed 00:11:48.706 Test: blockdev copy ...passed 00:11:48.706 Suite: bdevio tests on: concat0 00:11:48.706 Test: blockdev write read block ...passed 00:11:48.706 Test: blockdev write zeroes read block ...passed 00:11:48.706 Test: blockdev write zeroes read no split ...passed 00:11:48.706 Test: blockdev write zeroes read split ...passed 00:11:48.706 Test: blockdev write zeroes read split partial ...passed 00:11:48.706 Test: blockdev reset 
...passed 00:11:48.706 Test: blockdev write read 8 blocks ...passed 00:11:48.706 Test: blockdev write read size > 128k ...passed 00:11:48.706 Test: blockdev write read invalid size ...passed 00:11:48.706 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.706 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.706 Test: blockdev write read max offset ...passed 00:11:48.706 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.706 Test: blockdev writev readv 8 blocks ...passed 00:11:48.706 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.706 Test: blockdev writev readv block ...passed 00:11:48.706 Test: blockdev writev readv size > 128k ...passed 00:11:48.706 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.706 Test: blockdev comparev and writev ...passed 00:11:48.706 Test: blockdev nvme passthru rw ...passed 00:11:48.706 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.706 Test: blockdev nvme admin passthru ...passed 00:11:48.706 Test: blockdev copy ...passed 00:11:48.706 Suite: bdevio tests on: raid0 00:11:48.706 Test: blockdev write read block ...passed 00:11:48.706 Test: blockdev write zeroes read block ...passed 00:11:48.706 Test: blockdev write zeroes read no split ...passed 00:11:48.706 Test: blockdev write zeroes read split ...passed 00:11:48.706 Test: blockdev write zeroes read split partial ...passed 00:11:48.706 Test: blockdev reset ...passed 00:11:48.706 Test: blockdev write read 8 blocks ...passed 00:11:48.706 Test: blockdev write read size > 128k ...passed 00:11:48.706 Test: blockdev write read invalid size ...passed 00:11:48.706 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.706 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.706 Test: blockdev write read max offset ...passed 00:11:48.706 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.706 Test: blockdev writev readv 8 blocks ...passed 00:11:48.706 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.706 Test: blockdev writev readv block ...passed 00:11:48.706 Test: blockdev writev readv size > 128k ...passed 00:11:48.706 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.706 Test: blockdev comparev and writev ...passed 00:11:48.706 Test: blockdev nvme passthru rw ...passed 00:11:48.706 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.706 Test: blockdev nvme admin passthru ...passed 00:11:48.706 Test: blockdev copy ...passed 00:11:48.706 Suite: bdevio tests on: TestPT 00:11:48.706 Test: blockdev write read block ...passed 00:11:48.706 Test: blockdev write zeroes read block ...passed 00:11:48.706 Test: blockdev write zeroes read no split ...passed 00:11:48.706 Test: blockdev write zeroes read split ...passed 00:11:48.965 Test: blockdev write zeroes read split partial ...passed 00:11:48.965 Test: blockdev reset ...passed 00:11:48.965 Test: blockdev write read 8 blocks ...passed 00:11:48.965 Test: blockdev write read size > 128k ...passed 00:11:48.965 Test: blockdev write read invalid size ...passed 00:11:48.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.965 Test: blockdev write read max offset ...passed 00:11:48.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.965 Test: blockdev writev readv 8 blocks 
...passed 00:11:48.965 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.965 Test: blockdev writev readv block ...passed 00:11:48.965 Test: blockdev writev readv size > 128k ...passed 00:11:48.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.965 Test: blockdev comparev and writev ...passed 00:11:48.965 Test: blockdev nvme passthru rw ...passed 00:11:48.965 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.965 Test: blockdev nvme admin passthru ...passed 00:11:48.965 Test: blockdev copy ...passed 00:11:48.965 Suite: bdevio tests on: Malloc2p7 00:11:48.965 Test: blockdev write read block ...passed 00:11:48.965 Test: blockdev write zeroes read block ...passed 00:11:48.965 Test: blockdev write zeroes read no split ...passed 00:11:48.965 Test: blockdev write zeroes read split ...passed 00:11:48.965 Test: blockdev write zeroes read split partial ...passed 00:11:48.965 Test: blockdev reset ...passed 00:11:48.965 Test: blockdev write read 8 blocks ...passed 00:11:48.965 Test: blockdev write read size > 128k ...passed 00:11:48.965 Test: blockdev write read invalid size ...passed 00:11:48.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.965 Test: blockdev write read max offset ...passed 00:11:48.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.965 Test: blockdev writev readv 8 blocks ...passed 00:11:48.965 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.965 Test: blockdev writev readv block ...passed 00:11:48.965 Test: blockdev writev readv size > 128k ...passed 00:11:48.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.965 Test: blockdev comparev and writev ...passed 00:11:48.965 Test: blockdev nvme passthru rw ...passed 00:11:48.965 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.965 Test: blockdev nvme admin passthru ...passed 00:11:48.965 Test: blockdev copy ...passed 00:11:48.965 Suite: bdevio tests on: Malloc2p6 00:11:48.965 Test: blockdev write read block ...passed 00:11:48.965 Test: blockdev write zeroes read block ...passed 00:11:48.965 Test: blockdev write zeroes read no split ...passed 00:11:48.965 Test: blockdev write zeroes read split ...passed 00:11:48.965 Test: blockdev write zeroes read split partial ...passed 00:11:48.965 Test: blockdev reset ...passed 00:11:48.965 Test: blockdev write read 8 blocks ...passed 00:11:48.965 Test: blockdev write read size > 128k ...passed 00:11:48.965 Test: blockdev write read invalid size ...passed 00:11:48.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.965 Test: blockdev write read max offset ...passed 00:11:48.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.965 Test: blockdev writev readv 8 blocks ...passed 00:11:48.965 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.965 Test: blockdev writev readv block ...passed 00:11:48.965 Test: blockdev writev readv size > 128k ...passed 00:11:48.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.965 Test: blockdev comparev and writev ...passed 00:11:48.965 Test: blockdev nvme passthru rw ...passed 00:11:48.965 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.965 Test: blockdev nvme admin passthru ...passed 00:11:48.965 Test: blockdev copy ...passed 
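The bdevio output around this point repeats the same 23-test battery (blockdev write read block through blockdev copy) once per bdev under test. With 16 bdevs that accounts exactly for the totals reported in the run summary at the end of this block; a quick check, using only the counts visible in the log:

    # 16 suites x 23 tests per suite matches "tests 368 368 368 0 0" in the run summary
    suites=16
    tests_per_suite=23
    echo $(( suites * tests_per_suite ))   # prints 368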
00:11:48.965 Suite: bdevio tests on: Malloc2p5 00:11:48.965 Test: blockdev write read block ...passed 00:11:48.965 Test: blockdev write zeroes read block ...passed 00:11:48.965 Test: blockdev write zeroes read no split ...passed 00:11:48.965 Test: blockdev write zeroes read split ...passed 00:11:48.965 Test: blockdev write zeroes read split partial ...passed 00:11:48.965 Test: blockdev reset ...passed 00:11:48.965 Test: blockdev write read 8 blocks ...passed 00:11:48.965 Test: blockdev write read size > 128k ...passed 00:11:48.965 Test: blockdev write read invalid size ...passed 00:11:48.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.965 Test: blockdev write read max offset ...passed 00:11:48.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.965 Test: blockdev writev readv 8 blocks ...passed 00:11:48.965 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.965 Test: blockdev writev readv block ...passed 00:11:48.965 Test: blockdev writev readv size > 128k ...passed 00:11:48.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.965 Test: blockdev comparev and writev ...passed 00:11:48.965 Test: blockdev nvme passthru rw ...passed 00:11:48.965 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.965 Test: blockdev nvme admin passthru ...passed 00:11:48.965 Test: blockdev copy ...passed 00:11:48.965 Suite: bdevio tests on: Malloc2p4 00:11:48.965 Test: blockdev write read block ...passed 00:11:48.965 Test: blockdev write zeroes read block ...passed 00:11:48.966 Test: blockdev write zeroes read no split ...passed 00:11:48.966 Test: blockdev write zeroes read split ...passed 00:11:48.966 Test: blockdev write zeroes read split partial ...passed 00:11:48.966 Test: blockdev reset ...passed 00:11:48.966 Test: blockdev write read 8 blocks ...passed 00:11:48.966 Test: blockdev write read size > 128k ...passed 00:11:48.966 Test: blockdev write read invalid size ...passed 00:11:48.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.966 Test: blockdev write read max offset ...passed 00:11:48.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.966 Test: blockdev writev readv 8 blocks ...passed 00:11:48.966 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.966 Test: blockdev writev readv block ...passed 00:11:48.966 Test: blockdev writev readv size > 128k ...passed 00:11:48.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.966 Test: blockdev comparev and writev ...passed 00:11:48.966 Test: blockdev nvme passthru rw ...passed 00:11:48.966 Test: blockdev nvme passthru vendor specific ...passed 00:11:48.966 Test: blockdev nvme admin passthru ...passed 00:11:48.966 Test: blockdev copy ...passed 00:11:48.966 Suite: bdevio tests on: Malloc2p3 00:11:48.966 Test: blockdev write read block ...passed 00:11:48.966 Test: blockdev write zeroes read block ...passed 00:11:48.966 Test: blockdev write zeroes read no split ...passed 00:11:49.224 Test: blockdev write zeroes read split ...passed 00:11:49.224 Test: blockdev write zeroes read split partial ...passed 00:11:49.224 Test: blockdev reset ...passed 00:11:49.224 Test: blockdev write read 8 blocks ...passed 00:11:49.224 Test: blockdev write read size > 128k ...passed 00:11:49.224 Test: 
blockdev write read invalid size ...passed 00:11:49.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.224 Test: blockdev write read max offset ...passed 00:11:49.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.224 Test: blockdev writev readv 8 blocks ...passed 00:11:49.224 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.224 Test: blockdev writev readv block ...passed 00:11:49.224 Test: blockdev writev readv size > 128k ...passed 00:11:49.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.224 Test: blockdev comparev and writev ...passed 00:11:49.224 Test: blockdev nvme passthru rw ...passed 00:11:49.224 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.224 Test: blockdev nvme admin passthru ...passed 00:11:49.224 Test: blockdev copy ...passed 00:11:49.224 Suite: bdevio tests on: Malloc2p2 00:11:49.224 Test: blockdev write read block ...passed 00:11:49.224 Test: blockdev write zeroes read block ...passed 00:11:49.224 Test: blockdev write zeroes read no split ...passed 00:11:49.224 Test: blockdev write zeroes read split ...passed 00:11:49.224 Test: blockdev write zeroes read split partial ...passed 00:11:49.224 Test: blockdev reset ...passed 00:11:49.224 Test: blockdev write read 8 blocks ...passed 00:11:49.224 Test: blockdev write read size > 128k ...passed 00:11:49.224 Test: blockdev write read invalid size ...passed 00:11:49.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.224 Test: blockdev write read max offset ...passed 00:11:49.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.224 Test: blockdev writev readv 8 blocks ...passed 00:11:49.224 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.224 Test: blockdev writev readv block ...passed 00:11:49.224 Test: blockdev writev readv size > 128k ...passed 00:11:49.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.224 Test: blockdev comparev and writev ...passed 00:11:49.224 Test: blockdev nvme passthru rw ...passed 00:11:49.224 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.224 Test: blockdev nvme admin passthru ...passed 00:11:49.224 Test: blockdev copy ...passed 00:11:49.224 Suite: bdevio tests on: Malloc2p1 00:11:49.224 Test: blockdev write read block ...passed 00:11:49.224 Test: blockdev write zeroes read block ...passed 00:11:49.224 Test: blockdev write zeroes read no split ...passed 00:11:49.224 Test: blockdev write zeroes read split ...passed 00:11:49.224 Test: blockdev write zeroes read split partial ...passed 00:11:49.224 Test: blockdev reset ...passed 00:11:49.224 Test: blockdev write read 8 blocks ...passed 00:11:49.224 Test: blockdev write read size > 128k ...passed 00:11:49.224 Test: blockdev write read invalid size ...passed 00:11:49.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.225 Test: blockdev write read max offset ...passed 00:11:49.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.225 Test: blockdev writev readv 8 blocks ...passed 00:11:49.225 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.225 Test: blockdev writev readv block ...passed 
00:11:49.225 Test: blockdev writev readv size > 128k ...passed 00:11:49.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.225 Test: blockdev comparev and writev ...passed 00:11:49.225 Test: blockdev nvme passthru rw ...passed 00:11:49.225 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.225 Test: blockdev nvme admin passthru ...passed 00:11:49.225 Test: blockdev copy ...passed 00:11:49.225 Suite: bdevio tests on: Malloc2p0 00:11:49.225 Test: blockdev write read block ...passed 00:11:49.225 Test: blockdev write zeroes read block ...passed 00:11:49.225 Test: blockdev write zeroes read no split ...passed 00:11:49.225 Test: blockdev write zeroes read split ...passed 00:11:49.225 Test: blockdev write zeroes read split partial ...passed 00:11:49.225 Test: blockdev reset ...passed 00:11:49.225 Test: blockdev write read 8 blocks ...passed 00:11:49.225 Test: blockdev write read size > 128k ...passed 00:11:49.225 Test: blockdev write read invalid size ...passed 00:11:49.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.225 Test: blockdev write read max offset ...passed 00:11:49.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.225 Test: blockdev writev readv 8 blocks ...passed 00:11:49.225 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.225 Test: blockdev writev readv block ...passed 00:11:49.225 Test: blockdev writev readv size > 128k ...passed 00:11:49.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.225 Test: blockdev comparev and writev ...passed 00:11:49.225 Test: blockdev nvme passthru rw ...passed 00:11:49.225 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.225 Test: blockdev nvme admin passthru ...passed 00:11:49.225 Test: blockdev copy ...passed 00:11:49.225 Suite: bdevio tests on: Malloc1p1 00:11:49.225 Test: blockdev write read block ...passed 00:11:49.225 Test: blockdev write zeroes read block ...passed 00:11:49.225 Test: blockdev write zeroes read no split ...passed 00:11:49.225 Test: blockdev write zeroes read split ...passed 00:11:49.225 Test: blockdev write zeroes read split partial ...passed 00:11:49.225 Test: blockdev reset ...passed 00:11:49.225 Test: blockdev write read 8 blocks ...passed 00:11:49.225 Test: blockdev write read size > 128k ...passed 00:11:49.225 Test: blockdev write read invalid size ...passed 00:11:49.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.225 Test: blockdev write read max offset ...passed 00:11:49.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.225 Test: blockdev writev readv 8 blocks ...passed 00:11:49.225 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.225 Test: blockdev writev readv block ...passed 00:11:49.225 Test: blockdev writev readv size > 128k ...passed 00:11:49.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.225 Test: blockdev comparev and writev ...passed 00:11:49.225 Test: blockdev nvme passthru rw ...passed 00:11:49.225 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.225 Test: blockdev nvme admin passthru ...passed 00:11:49.225 Test: blockdev copy ...passed 00:11:49.225 Suite: bdevio tests on: Malloc1p0 00:11:49.225 Test: blockdev write read block ...passed 00:11:49.225 Test: blockdev 
write zeroes read block ...passed 00:11:49.225 Test: blockdev write zeroes read no split ...passed 00:11:49.225 Test: blockdev write zeroes read split ...passed 00:11:49.483 Test: blockdev write zeroes read split partial ...passed 00:11:49.483 Test: blockdev reset ...passed 00:11:49.483 Test: blockdev write read 8 blocks ...passed 00:11:49.483 Test: blockdev write read size > 128k ...passed 00:11:49.483 Test: blockdev write read invalid size ...passed 00:11:49.483 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.483 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.483 Test: blockdev write read max offset ...passed 00:11:49.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.484 Test: blockdev writev readv 8 blocks ...passed 00:11:49.484 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.484 Test: blockdev writev readv block ...passed 00:11:49.484 Test: blockdev writev readv size > 128k ...passed 00:11:49.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.484 Test: blockdev comparev and writev ...passed 00:11:49.484 Test: blockdev nvme passthru rw ...passed 00:11:49.484 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.484 Test: blockdev nvme admin passthru ...passed 00:11:49.484 Test: blockdev copy ...passed 00:11:49.484 Suite: bdevio tests on: Malloc0 00:11:49.484 Test: blockdev write read block ...passed 00:11:49.484 Test: blockdev write zeroes read block ...passed 00:11:49.484 Test: blockdev write zeroes read no split ...passed 00:11:49.484 Test: blockdev write zeroes read split ...passed 00:11:49.484 Test: blockdev write zeroes read split partial ...passed 00:11:49.484 Test: blockdev reset ...passed 00:11:49.484 Test: blockdev write read 8 blocks ...passed 00:11:49.484 Test: blockdev write read size > 128k ...passed 00:11:49.484 Test: blockdev write read invalid size ...passed 00:11:49.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:49.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:49.484 Test: blockdev write read max offset ...passed 00:11:49.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:49.484 Test: blockdev writev readv 8 blocks ...passed 00:11:49.484 Test: blockdev writev readv 30 x 1block ...passed 00:11:49.484 Test: blockdev writev readv block ...passed 00:11:49.484 Test: blockdev writev readv size > 128k ...passed 00:11:49.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:49.484 Test: blockdev comparev and writev ...passed 00:11:49.484 Test: blockdev nvme passthru rw ...passed 00:11:49.484 Test: blockdev nvme passthru vendor specific ...passed 00:11:49.484 Test: blockdev nvme admin passthru ...passed 00:11:49.484 Test: blockdev copy ...passed 00:11:49.484 00:11:49.484 Run Summary: Type Total Ran Passed Failed Inactive 00:11:49.484 suites 16 16 n/a 0 0 00:11:49.484 tests 368 368 368 0 0 00:11:49.484 asserts 2224 2224 2224 0 n/a 00:11:49.484 00:11:49.484 Elapsed time = 2.376 seconds 00:11:49.484 0 00:11:49.484 12:57:08 -- bdev/blockdev.sh@293 -- # killprocess 111095 00:11:49.484 12:57:08 -- common/autotest_common.sh@926 -- # '[' -z 111095 ']' 00:11:49.484 12:57:08 -- common/autotest_common.sh@930 -- # kill -0 111095 00:11:49.484 12:57:08 -- common/autotest_common.sh@931 -- # uname 00:11:49.484 12:57:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:49.484 12:57:08 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111095 00:11:49.484 12:57:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:49.484 12:57:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:49.484 12:57:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111095' 00:11:49.484 killing process with pid 111095 00:11:49.484 12:57:08 -- common/autotest_common.sh@945 -- # kill 111095 00:11:49.484 12:57:08 -- common/autotest_common.sh@950 -- # wait 111095 00:11:51.387 12:57:09 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:51.387 00:11:51.387 real 0m4.285s 00:11:51.387 user 0m11.126s 00:11:51.387 sys 0m0.558s 00:11:51.387 12:57:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.387 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:11:51.387 ************************************ 00:11:51.387 END TEST bdev_bounds 00:11:51.387 ************************************ 00:11:51.387 12:57:09 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:51.387 12:57:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:51.387 12:57:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:51.387 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:11:51.387 ************************************ 00:11:51.387 START TEST bdev_nbd 00:11:51.387 ************************************ 00:11:51.387 12:57:09 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:51.387 12:57:09 -- bdev/blockdev.sh@298 -- # uname -s 00:11:51.387 12:57:09 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:51.387 12:57:09 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.387 12:57:09 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:51.387 12:57:09 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:11:51.387 12:57:09 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:51.387 12:57:09 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:11:51.387 12:57:09 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:51.387 12:57:09 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:11:51.387 12:57:09 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:51.387 12:57:09 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:11:51.387 12:57:09 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:11:51.387 12:57:09 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:51.387 12:57:09 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:11:51.387 12:57:09 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:51.387 12:57:09 -- bdev/blockdev.sh@316 -- # nbd_pid=111184 00:11:51.387 12:57:09 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:51.387 12:57:09 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:51.387 12:57:09 -- bdev/blockdev.sh@318 -- # waitforlisten 111184 /var/tmp/spdk-nbd.sock 00:11:51.387 12:57:09 -- common/autotest_common.sh@819 -- # '[' -z 111184 ']' 
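Before any /dev/nbd node is touched, the trace above shows nbd_function_test assembling the bdev list, reserving 16 nbd nodes, launching bdev_svc against the generated bdev.json, and waiting for its RPC socket. A condensed sketch of that setup, with paths and arguments copied from the trace (the helper bodies are simplified, not the verbatim scripts):

    # Condensed from the trace above; simplified, not the verbatim test scripts.
    rpc_server=/var/tmp/spdk-nbd.sock
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    bdev_all=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4
              Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)
    nbd_all=(/dev/nbd+([0-9]))      # extglob; only valid because /sys/module/nbd exists
    bdev_num=16
    # Start a bare bdev application exposing the JSON-RPC socket, then wait for it.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_server" -i 0 --json "$conf" &
    nbd_pid=$!
    waitforlisten "$nbd_pid" "$rpc_server"   # poll until the UNIX socket accepts RPCs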
00:11:51.387 12:57:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:51.387 12:57:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:51.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:51.387 12:57:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:51.387 12:57:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:51.387 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:11:51.387 [2024-06-11 12:57:09.931736] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:51.387 [2024-06-11 12:57:09.931952] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.387 [2024-06-11 12:57:10.097381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.655 [2024-06-11 12:57:10.285587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.929 [2024-06-11 12:57:10.632725] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:51.929 [2024-06-11 12:57:10.633048] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:51.929 [2024-06-11 12:57:10.640690] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.929 [2024-06-11 12:57:10.640868] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.929 [2024-06-11 12:57:10.648694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:51.929 [2024-06-11 12:57:10.648844] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:51.929 [2024-06-11 12:57:10.648990] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:52.188 [2024-06-11 12:57:10.828003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:52.188 [2024-06-11 12:57:10.828327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.188 [2024-06-11 12:57:10.828468] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:52.188 [2024-06-11 12:57:10.828583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.188 [2024-06-11 12:57:10.831159] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.188 [2024-06-11 12:57:10.831343] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:52.755 12:57:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:52.756 12:57:11 -- common/autotest_common.sh@852 -- # return 0 00:11:52.756 12:57:11 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 
Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@24 -- # local i 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:52.756 12:57:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:53.014 12:57:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:53.014 12:57:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:53.014 12:57:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:53.014 12:57:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:53.014 12:57:11 -- common/autotest_common.sh@857 -- # local i 00:11:53.014 12:57:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:53.014 12:57:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:53.014 12:57:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:53.014 12:57:11 -- common/autotest_common.sh@861 -- # break 00:11:53.014 12:57:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:53.014 12:57:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:53.014 12:57:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.014 1+0 records in 00:11:53.014 1+0 records out 00:11:53.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271666 s, 15.1 MB/s 00:11:53.014 12:57:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.014 12:57:11 -- common/autotest_common.sh@874 -- # size=4096 00:11:53.014 12:57:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.014 12:57:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:53.014 12:57:11 -- common/autotest_common.sh@877 -- # return 0 00:11:53.014 12:57:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:53.015 12:57:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:53.015 12:57:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:53.273 12:57:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:53.273 12:57:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:53.274 12:57:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:53.274 12:57:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:53.274 12:57:11 -- common/autotest_common.sh@857 -- # local i 00:11:53.274 12:57:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:53.274 12:57:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:53.274 12:57:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:53.274 12:57:11 -- common/autotest_common.sh@861 -- # break 00:11:53.274 12:57:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:53.274 12:57:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:53.274 12:57:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:11:53.274 1+0 records in 00:11:53.274 1+0 records out 00:11:53.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220523 s, 18.6 MB/s 00:11:53.274 12:57:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.274 12:57:11 -- common/autotest_common.sh@874 -- # size=4096 00:11:53.274 12:57:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.274 12:57:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:53.274 12:57:11 -- common/autotest_common.sh@877 -- # return 0 00:11:53.274 12:57:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:53.274 12:57:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:53.274 12:57:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:53.532 12:57:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:53.532 12:57:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:53.532 12:57:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:53.532 12:57:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:11:53.532 12:57:12 -- common/autotest_common.sh@857 -- # local i 00:11:53.532 12:57:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:53.532 12:57:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:53.532 12:57:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:11:53.532 12:57:12 -- common/autotest_common.sh@861 -- # break 00:11:53.532 12:57:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:53.532 12:57:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:53.532 12:57:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.532 1+0 records in 00:11:53.532 1+0 records out 00:11:53.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296371 s, 13.8 MB/s 00:11:53.532 12:57:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.532 12:57:12 -- common/autotest_common.sh@874 -- # size=4096 00:11:53.532 12:57:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.532 12:57:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:53.532 12:57:12 -- common/autotest_common.sh@877 -- # return 0 00:11:53.532 12:57:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:53.532 12:57:12 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:53.532 12:57:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:53.792 12:57:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:53.792 12:57:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:53.792 12:57:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:53.792 12:57:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:11:53.792 12:57:12 -- common/autotest_common.sh@857 -- # local i 00:11:53.792 12:57:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:53.792 12:57:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:53.792 12:57:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:11:53.792 12:57:12 -- common/autotest_common.sh@861 -- # break 00:11:53.792 12:57:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:53.792 12:57:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:53.792 12:57:12 -- common/autotest_common.sh@873 -- # dd 
if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.792 1+0 records in 00:11:53.792 1+0 records out 00:11:53.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256519 s, 16.0 MB/s 00:11:53.792 12:57:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.792 12:57:12 -- common/autotest_common.sh@874 -- # size=4096 00:11:53.792 12:57:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.792 12:57:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:53.792 12:57:12 -- common/autotest_common.sh@877 -- # return 0 00:11:53.792 12:57:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:53.792 12:57:12 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:53.792 12:57:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:54.051 12:57:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:54.051 12:57:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:54.051 12:57:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:54.051 12:57:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:11:54.051 12:57:12 -- common/autotest_common.sh@857 -- # local i 00:11:54.051 12:57:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:54.051 12:57:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:54.051 12:57:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:11:54.051 12:57:12 -- common/autotest_common.sh@861 -- # break 00:11:54.051 12:57:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:54.051 12:57:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:54.051 12:57:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.051 1+0 records in 00:11:54.051 1+0 records out 00:11:54.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555315 s, 7.4 MB/s 00:11:54.051 12:57:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.051 12:57:12 -- common/autotest_common.sh@874 -- # size=4096 00:11:54.051 12:57:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.051 12:57:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:54.051 12:57:12 -- common/autotest_common.sh@877 -- # return 0 00:11:54.051 12:57:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:54.051 12:57:12 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:54.051 12:57:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:54.310 12:57:13 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:54.310 12:57:13 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:54.310 12:57:13 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:54.310 12:57:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:11:54.310 12:57:13 -- common/autotest_common.sh@857 -- # local i 00:11:54.310 12:57:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:54.310 12:57:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:54.310 12:57:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:11:54.310 12:57:13 -- common/autotest_common.sh@861 -- # break 00:11:54.310 12:57:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:54.310 12:57:13 -- 
common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:54.310 12:57:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.310 1+0 records in 00:11:54.310 1+0 records out 00:11:54.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376114 s, 10.9 MB/s 00:11:54.310 12:57:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.310 12:57:13 -- common/autotest_common.sh@874 -- # size=4096 00:11:54.310 12:57:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.310 12:57:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:54.310 12:57:13 -- common/autotest_common.sh@877 -- # return 0 00:11:54.310 12:57:13 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:54.310 12:57:13 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:54.310 12:57:13 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:54.877 12:57:13 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:54.877 12:57:13 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:54.877 12:57:13 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:54.877 12:57:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:11:54.877 12:57:13 -- common/autotest_common.sh@857 -- # local i 00:11:54.877 12:57:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:54.877 12:57:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:54.877 12:57:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:11:54.877 12:57:13 -- common/autotest_common.sh@861 -- # break 00:11:54.877 12:57:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:54.877 12:57:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:54.877 12:57:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.877 1+0 records in 00:11:54.877 1+0 records out 00:11:54.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668758 s, 6.1 MB/s 00:11:54.877 12:57:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.877 12:57:13 -- common/autotest_common.sh@874 -- # size=4096 00:11:54.877 12:57:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.877 12:57:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:54.877 12:57:13 -- common/autotest_common.sh@877 -- # return 0 00:11:54.877 12:57:13 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:54.877 12:57:13 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:54.877 12:57:13 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:55.136 12:57:13 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:55.136 12:57:13 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:55.136 12:57:13 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:55.136 12:57:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:11:55.136 12:57:13 -- common/autotest_common.sh@857 -- # local i 00:11:55.136 12:57:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.136 12:57:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.136 12:57:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:11:55.136 12:57:13 -- common/autotest_common.sh@861 -- # break 
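Each nbd_start_disk call in this part of the trace is followed by the same waitfornbd pattern: poll /proc/partitions for the new node, then prove it serves I/O with a single 4 KiB O_DIRECT read whose size is checked. Reconstructed from the xtrace (line tags @856 through @877); this mirrors the traced behaviour only and is not the verbatim autotest_common.sh source:

    # Reconstructed from the xtrace above; mirrors the traced commands, not the real helper.
    waitfornbd() {
        local nbd_name=$1 i size
        # Loop 1: wait (up to 20 iterations) for the device to appear in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
        done
        # Loop 2: a single 4 KiB direct read shows the block device actually serves I/O.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
                  bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
                rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
                [ "$size" != 0 ] && return 0
            fi
        done
        return 1
    }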
00:11:55.136 12:57:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.137 12:57:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.137 12:57:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.137 1+0 records in 00:11:55.137 1+0 records out 00:11:55.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402807 s, 10.2 MB/s 00:11:55.137 12:57:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.137 12:57:13 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.137 12:57:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.137 12:57:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:55.137 12:57:13 -- common/autotest_common.sh@877 -- # return 0 00:11:55.137 12:57:13 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.137 12:57:13 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.137 12:57:13 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:55.137 12:57:13 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:55.137 12:57:13 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:55.137 12:57:13 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:55.137 12:57:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:11:55.137 12:57:13 -- common/autotest_common.sh@857 -- # local i 00:11:55.137 12:57:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.137 12:57:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.137 12:57:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:11:55.137 12:57:13 -- common/autotest_common.sh@861 -- # break 00:11:55.137 12:57:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.137 12:57:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.137 12:57:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.137 1+0 records in 00:11:55.137 1+0 records out 00:11:55.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551862 s, 7.4 MB/s 00:11:55.137 12:57:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.137 12:57:13 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.137 12:57:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.137 12:57:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:55.137 12:57:13 -- common/autotest_common.sh@877 -- # return 0 00:11:55.137 12:57:13 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.137 12:57:13 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.137 12:57:13 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:55.705 12:57:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:55.705 12:57:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:55.705 12:57:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:55.705 12:57:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:11:55.705 12:57:14 -- common/autotest_common.sh@857 -- # local i 00:11:55.705 12:57:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.705 12:57:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.705 12:57:14 -- common/autotest_common.sh@860 -- # grep -q -w 
nbd9 /proc/partitions 00:11:55.705 12:57:14 -- common/autotest_common.sh@861 -- # break 00:11:55.705 12:57:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.705 12:57:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.705 12:57:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.705 1+0 records in 00:11:55.705 1+0 records out 00:11:55.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464001 s, 8.8 MB/s 00:11:55.705 12:57:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.705 12:57:14 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.705 12:57:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.705 12:57:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:55.705 12:57:14 -- common/autotest_common.sh@877 -- # return 0 00:11:55.705 12:57:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.705 12:57:14 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.705 12:57:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:55.963 12:57:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:55.963 12:57:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:55.963 12:57:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:55.963 12:57:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:11:55.963 12:57:14 -- common/autotest_common.sh@857 -- # local i 00:11:55.963 12:57:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.963 12:57:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.963 12:57:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:11:55.963 12:57:14 -- common/autotest_common.sh@861 -- # break 00:11:55.963 12:57:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.963 12:57:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.963 12:57:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.963 1+0 records in 00:11:55.963 1+0 records out 00:11:55.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681841 s, 6.0 MB/s 00:11:55.963 12:57:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.963 12:57:14 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.963 12:57:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.963 12:57:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:55.963 12:57:14 -- common/autotest_common.sh@877 -- # return 0 00:11:55.963 12:57:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.963 12:57:14 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.963 12:57:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:56.220 12:57:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:56.220 12:57:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:56.220 12:57:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:56.220 12:57:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:11:56.220 12:57:14 -- common/autotest_common.sh@857 -- # local i 00:11:56.220 12:57:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:56.220 12:57:14 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:56.220 12:57:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:11:56.220 12:57:14 -- common/autotest_common.sh@861 -- # break 00:11:56.220 12:57:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:56.220 12:57:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:56.220 12:57:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.220 1+0 records in 00:11:56.220 1+0 records out 00:11:56.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633625 s, 6.5 MB/s 00:11:56.220 12:57:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.220 12:57:14 -- common/autotest_common.sh@874 -- # size=4096 00:11:56.220 12:57:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.220 12:57:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:56.220 12:57:14 -- common/autotest_common.sh@877 -- # return 0 00:11:56.220 12:57:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.220 12:57:14 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.220 12:57:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:56.479 12:57:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:56.479 12:57:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:56.479 12:57:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:56.479 12:57:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:11:56.479 12:57:15 -- common/autotest_common.sh@857 -- # local i 00:11:56.479 12:57:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:56.479 12:57:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:56.479 12:57:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:11:56.479 12:57:15 -- common/autotest_common.sh@861 -- # break 00:11:56.479 12:57:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:56.479 12:57:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:56.480 12:57:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.480 1+0 records in 00:11:56.480 1+0 records out 00:11:56.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814364 s, 5.0 MB/s 00:11:56.480 12:57:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.480 12:57:15 -- common/autotest_common.sh@874 -- # size=4096 00:11:56.480 12:57:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.480 12:57:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:56.480 12:57:15 -- common/autotest_common.sh@877 -- # return 0 00:11:56.480 12:57:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.480 12:57:15 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.480 12:57:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:56.738 12:57:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:56.738 12:57:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:56.738 12:57:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:56.738 12:57:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:11:56.738 12:57:15 -- common/autotest_common.sh@857 -- # local i 
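The outer loop driving these attachments walks the first bdev_num entries of the bdev list, asks the RPC server to export each one on the next free /dev/nbd node, and then runs the readiness check sketched above. A simplified restatement of what the trace shows (the real helper is nbd_start_disks_without_nbd_idx in nbd_common.sh):

    # Simplified restatement of the traced loop; not the verbatim nbd_common.sh helper.
    for bdev_name in "${bdev_list[@]}"; do
        nbd_device=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_start_disk "$bdev_name")          # RPC returns the node, e.g. /dev/nbd12
        waitfornbd "$(basename "$nbd_device")"    # readiness check sketched earlier
    done

The dd figures printed during each check (for example 0.000814364 s, 5.0 MB/s for /dev/nbd12) describe a single 4 KiB direct read, so they indicate per-I/O latency rather than sustained throughput.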
00:11:56.738 12:57:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:56.738 12:57:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:56.738 12:57:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:11:56.738 12:57:15 -- common/autotest_common.sh@861 -- # break 00:11:56.738 12:57:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:56.738 12:57:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:56.738 12:57:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.738 1+0 records in 00:11:56.738 1+0 records out 00:11:56.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594009 s, 6.9 MB/s 00:11:56.738 12:57:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.738 12:57:15 -- common/autotest_common.sh@874 -- # size=4096 00:11:56.738 12:57:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.738 12:57:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:56.738 12:57:15 -- common/autotest_common.sh@877 -- # return 0 00:11:56.738 12:57:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.738 12:57:15 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.738 12:57:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:56.996 12:57:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:56.996 12:57:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:56.996 12:57:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:56.996 12:57:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:11:56.996 12:57:15 -- common/autotest_common.sh@857 -- # local i 00:11:56.996 12:57:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:56.996 12:57:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:56.996 12:57:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:11:56.996 12:57:15 -- common/autotest_common.sh@861 -- # break 00:11:56.996 12:57:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:56.996 12:57:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:56.996 12:57:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.996 1+0 records in 00:11:56.996 1+0 records out 00:11:56.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114818 s, 3.6 MB/s 00:11:56.996 12:57:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.996 12:57:15 -- common/autotest_common.sh@874 -- # size=4096 00:11:56.996 12:57:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.996 12:57:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:56.996 12:57:15 -- common/autotest_common.sh@877 -- # return 0 00:11:56.996 12:57:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.996 12:57:15 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.997 12:57:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:57.255 12:57:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:57.255 12:57:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:57.255 12:57:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:57.255 12:57:15 -- common/autotest_common.sh@856 -- # 
local nbd_name=nbd15 00:11:57.255 12:57:15 -- common/autotest_common.sh@857 -- # local i 00:11:57.255 12:57:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:57.255 12:57:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:57.255 12:57:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:11:57.255 12:57:15 -- common/autotest_common.sh@861 -- # break 00:11:57.255 12:57:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:57.255 12:57:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:57.256 12:57:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.256 1+0 records in 00:11:57.256 1+0 records out 00:11:57.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111605 s, 3.7 MB/s 00:11:57.256 12:57:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.256 12:57:15 -- common/autotest_common.sh@874 -- # size=4096 00:11:57.256 12:57:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.256 12:57:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:57.256 12:57:15 -- common/autotest_common.sh@877 -- # return 0 00:11:57.256 12:57:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.256 12:57:15 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.256 12:57:15 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:57.514 12:57:16 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:57.514 { 00:11:57.514 "nbd_device": "/dev/nbd0", 00:11:57.514 "bdev_name": "Malloc0" 00:11:57.514 }, 00:11:57.514 { 00:11:57.514 "nbd_device": "/dev/nbd1", 00:11:57.514 "bdev_name": "Malloc1p0" 00:11:57.514 }, 00:11:57.514 { 00:11:57.514 "nbd_device": "/dev/nbd2", 00:11:57.514 "bdev_name": "Malloc1p1" 00:11:57.514 }, 00:11:57.514 { 00:11:57.514 "nbd_device": "/dev/nbd3", 00:11:57.514 "bdev_name": "Malloc2p0" 00:11:57.514 }, 00:11:57.514 { 00:11:57.514 "nbd_device": "/dev/nbd4", 00:11:57.514 "bdev_name": "Malloc2p1" 00:11:57.514 }, 00:11:57.514 { 00:11:57.514 "nbd_device": "/dev/nbd5", 00:11:57.514 "bdev_name": "Malloc2p2" 00:11:57.514 }, 00:11:57.514 { 00:11:57.514 "nbd_device": "/dev/nbd6", 00:11:57.514 "bdev_name": "Malloc2p3" 00:11:57.514 }, 00:11:57.514 { 00:11:57.514 "nbd_device": "/dev/nbd7", 00:11:57.514 "bdev_name": "Malloc2p4" 00:11:57.514 }, 00:11:57.514 { 00:11:57.514 "nbd_device": "/dev/nbd8", 00:11:57.514 "bdev_name": "Malloc2p5" 00:11:57.514 }, 00:11:57.514 { 00:11:57.514 "nbd_device": "/dev/nbd9", 00:11:57.514 "bdev_name": "Malloc2p6" 00:11:57.514 }, 00:11:57.514 { 00:11:57.514 "nbd_device": "/dev/nbd10", 00:11:57.515 "bdev_name": "Malloc2p7" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd11", 00:11:57.515 "bdev_name": "TestPT" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd12", 00:11:57.515 "bdev_name": "raid0" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd13", 00:11:57.515 "bdev_name": "concat0" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd14", 00:11:57.515 "bdev_name": "raid1" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd15", 00:11:57.515 "bdev_name": "AIO0" 00:11:57.515 } 00:11:57.515 ]' 00:11:57.515 12:57:16 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:57.515 12:57:16 -- bdev/nbd_common.sh@119 -- # echo '[ 
00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd0", 00:11:57.515 "bdev_name": "Malloc0" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd1", 00:11:57.515 "bdev_name": "Malloc1p0" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd2", 00:11:57.515 "bdev_name": "Malloc1p1" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd3", 00:11:57.515 "bdev_name": "Malloc2p0" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd4", 00:11:57.515 "bdev_name": "Malloc2p1" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd5", 00:11:57.515 "bdev_name": "Malloc2p2" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd6", 00:11:57.515 "bdev_name": "Malloc2p3" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd7", 00:11:57.515 "bdev_name": "Malloc2p4" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd8", 00:11:57.515 "bdev_name": "Malloc2p5" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd9", 00:11:57.515 "bdev_name": "Malloc2p6" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd10", 00:11:57.515 "bdev_name": "Malloc2p7" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd11", 00:11:57.515 "bdev_name": "TestPT" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd12", 00:11:57.515 "bdev_name": "raid0" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd13", 00:11:57.515 "bdev_name": "concat0" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd14", 00:11:57.515 "bdev_name": "raid1" 00:11:57.515 }, 00:11:57.515 { 00:11:57.515 "nbd_device": "/dev/nbd15", 00:11:57.515 "bdev_name": "AIO0" 00:11:57.515 } 00:11:57.515 ]' 00:11:57.515 12:57:16 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:57.515 12:57:16 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:57.515 12:57:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.515 12:57:16 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:11:57.515 12:57:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:57.515 12:57:16 -- bdev/nbd_common.sh@51 -- # local i 00:11:57.515 12:57:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:57.515 12:57:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:57.773 12:57:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:57.773 12:57:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:57.773 12:57:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:57.773 12:57:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:57.773 12:57:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:57.773 12:57:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:57.773 12:57:16 -- bdev/nbd_common.sh@41 -- # break 00:11:57.773 12:57:16 -- bdev/nbd_common.sh@45 -- # return 0 00:11:57.773 12:57:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:57.773 12:57:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:58.032 12:57:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:58.032 12:57:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:58.032 12:57:16 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:58.032 12:57:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.032 12:57:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.032 12:57:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:58.032 12:57:16 -- bdev/nbd_common.sh@41 -- # break 00:11:58.032 12:57:16 -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.032 12:57:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.032 12:57:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:58.291 12:57:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:58.291 12:57:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:58.291 12:57:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:58.291 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.291 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.291 12:57:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:58.291 12:57:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@41 -- # break 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:58.550 12:57:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:58.809 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:58.809 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.809 12:57:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:58.809 12:57:17 -- bdev/nbd_common.sh@41 -- # break 00:11:58.809 12:57:17 -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.809 12:57:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.809 12:57:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@41 -- # break 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.068 12:57:17 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.068 12:57:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:59.326 12:57:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:59.326 12:57:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:59.326 12:57:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:59.326 12:57:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.326 12:57:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.326 12:57:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:59.326 12:57:18 -- bdev/nbd_common.sh@41 -- # break 00:11:59.326 12:57:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.326 12:57:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.326 12:57:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:59.584 12:57:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:59.584 12:57:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:59.584 12:57:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:59.584 12:57:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.584 12:57:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.584 12:57:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:59.584 12:57:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:59.843 12:57:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:59.843 12:57:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.843 12:57:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:59.843 12:57:18 -- bdev/nbd_common.sh@41 -- # break 00:11:59.843 12:57:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.843 12:57:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.843 12:57:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@41 -- # break 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@41 -- # break 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.102 12:57:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd9 00:12:00.361 12:57:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:00.361 12:57:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:00.361 12:57:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:00.361 12:57:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.361 12:57:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.361 12:57:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:00.361 12:57:19 -- bdev/nbd_common.sh@41 -- # break 00:12:00.361 12:57:19 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.361 12:57:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.361 12:57:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:00.619 12:57:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@41 -- # break 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:00.878 12:57:19 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:01.136 12:57:19 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:01.136 12:57:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.136 12:57:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:01.136 12:57:19 -- bdev/nbd_common.sh@41 -- # break 00:12:01.136 12:57:19 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.136 12:57:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.136 12:57:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:01.395 12:57:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:01.395 12:57:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:01.395 12:57:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:01.395 12:57:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.395 12:57:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.395 12:57:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:01.395 12:57:20 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:01.395 12:57:20 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:01.395 12:57:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.396 12:57:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:01.396 12:57:20 -- bdev/nbd_common.sh@41 -- # break 00:12:01.396 12:57:20 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.396 12:57:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.396 
12:57:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:01.654 12:57:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:01.654 12:57:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:01.654 12:57:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:01.654 12:57:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.654 12:57:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.654 12:57:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:01.654 12:57:20 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@41 -- # break 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@41 -- # break 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.942 12:57:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:02.200 12:57:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:02.200 12:57:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:02.200 12:57:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:02.200 12:57:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.200 12:57:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.200 12:57:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:02.200 12:57:21 -- bdev/nbd_common.sh@41 -- # break 00:12:02.200 12:57:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.201 12:57:21 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:02.201 12:57:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.201 12:57:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@65 -- # true 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@65 -- # count=0 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@122 -- # count=0 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
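The waitfornbd_exit calls traced through this stretch poll /proc/partitions until each stopped device disappears. Read as shell, the helper amounts to roughly the following loop, a sketch reconstructed from the traced line numbers (@35-@45), with the 20-iteration limit and 0.1 s sleep taken from the traced values; the real nbd_common.sh may differ, and the timeout path is not visible in this trace:

waitfornbd_exit() {
	local nbd_name=$1
	local i
	for ((i = 1; i <= 20; i++)); do
		# Once the kernel tears the export down, the node drops out of /proc/partitions.
		if ! grep -q -w "$nbd_name" /proc/partitions; then
			break
		fi
		sleep 0.1
	done
	return 0
}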
00:12:02.459 12:57:21 -- bdev/nbd_common.sh@127 -- # return 0 00:12:02.459 12:57:21 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@12 -- # local i 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:02.459 12:57:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:03.026 /dev/nbd0 00:12:03.026 12:57:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:03.026 12:57:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:03.026 12:57:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:03.026 12:57:21 -- common/autotest_common.sh@857 -- # local i 00:12:03.026 12:57:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.026 12:57:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.026 12:57:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:03.026 12:57:21 -- common/autotest_common.sh@861 -- # break 00:12:03.026 12:57:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.026 12:57:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.026 12:57:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.026 1+0 records in 00:12:03.026 1+0 records out 00:12:03.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003064 s, 13.4 MB/s 00:12:03.026 12:57:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.026 12:57:21 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.026 12:57:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.026 12:57:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.026 12:57:21 -- common/autotest_common.sh@877 -- # return 0 00:12:03.026 12:57:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.026 12:57:21 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
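The waitfornbd calls from autotest_common.sh that follow each nbd_start_disk do the opposite of waitfornbd_exit: wait for the node to appear, then prove it is readable with a single 4 KiB O_DIRECT read. A sketch reconstructed from the trace (@856-@877); only the successful path is visible here, so the sleep/retry handling of the second loop is an assumption:

waitfornbd() {
	local nbd_name=$1
	local i size
	local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # path as seen in the trace
	# First loop: wait for the device node to register in /proc/partitions.
	for ((i = 1; i <= 20; i++)); do
		if grep -q -w "$nbd_name" /proc/partitions; then
			break
		fi
		sleep 0.1
	done
	# Second loop: one 4 KiB O_DIRECT read must succeed and produce a non-empty file.
	for ((i = 1; i <= 20; i++)); do
		if dd if=/dev/$nbd_name of="$tmp_file" bs=4096 count=1 iflag=direct; then
			size=$(stat -c %s "$tmp_file")
			rm -f "$tmp_file"
			if [ "$size" != 0 ]; then
				return 0
			fi
		fi
		sleep 0.1
	done
	return 1
}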
00:12:03.026 12:57:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:03.026 /dev/nbd1 00:12:03.026 12:57:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:03.026 12:57:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:03.026 12:57:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:03.026 12:57:21 -- common/autotest_common.sh@857 -- # local i 00:12:03.026 12:57:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.026 12:57:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.026 12:57:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:03.026 12:57:21 -- common/autotest_common.sh@861 -- # break 00:12:03.026 12:57:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.026 12:57:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.026 12:57:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.026 1+0 records in 00:12:03.026 1+0 records out 00:12:03.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387486 s, 10.6 MB/s 00:12:03.026 12:57:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.026 12:57:21 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.026 12:57:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.026 12:57:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.026 12:57:21 -- common/autotest_common.sh@877 -- # return 0 00:12:03.026 12:57:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.026 12:57:21 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:03.026 12:57:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:03.285 /dev/nbd10 00:12:03.285 12:57:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:03.285 12:57:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:03.285 12:57:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:03.285 12:57:22 -- common/autotest_common.sh@857 -- # local i 00:12:03.285 12:57:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.285 12:57:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.285 12:57:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:03.285 12:57:22 -- common/autotest_common.sh@861 -- # break 00:12:03.285 12:57:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.285 12:57:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.285 12:57:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.285 1+0 records in 00:12:03.285 1+0 records out 00:12:03.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274576 s, 14.9 MB/s 00:12:03.285 12:57:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.285 12:57:22 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.285 12:57:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.285 12:57:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.285 12:57:22 -- common/autotest_common.sh@877 -- # return 0 00:12:03.285 12:57:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.285 12:57:22 -- bdev/nbd_common.sh@14 
-- # (( i < 16 )) 00:12:03.285 12:57:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:03.542 /dev/nbd11 00:12:03.542 12:57:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:03.542 12:57:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:03.542 12:57:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:03.542 12:57:22 -- common/autotest_common.sh@857 -- # local i 00:12:03.542 12:57:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.542 12:57:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.542 12:57:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:03.542 12:57:22 -- common/autotest_common.sh@861 -- # break 00:12:03.542 12:57:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.542 12:57:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.542 12:57:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.542 1+0 records in 00:12:03.542 1+0 records out 00:12:03.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433027 s, 9.5 MB/s 00:12:03.542 12:57:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.542 12:57:22 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.542 12:57:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.542 12:57:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.542 12:57:22 -- common/autotest_common.sh@877 -- # return 0 00:12:03.542 12:57:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.542 12:57:22 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:03.542 12:57:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:03.800 /dev/nbd12 00:12:03.800 12:57:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:03.800 12:57:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:03.800 12:57:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:03.800 12:57:22 -- common/autotest_common.sh@857 -- # local i 00:12:03.800 12:57:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.800 12:57:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.800 12:57:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:03.800 12:57:22 -- common/autotest_common.sh@861 -- # break 00:12:03.800 12:57:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.800 12:57:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.800 12:57:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.800 1+0 records in 00:12:03.800 1+0 records out 00:12:03.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450714 s, 9.1 MB/s 00:12:03.800 12:57:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.800 12:57:22 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.800 12:57:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.800 12:57:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.800 12:57:22 -- common/autotest_common.sh@877 -- # return 0 00:12:03.800 12:57:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.800 12:57:22 -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:03.800 12:57:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:04.059 /dev/nbd13 00:12:04.059 12:57:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:04.059 12:57:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:04.059 12:57:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:04.059 12:57:22 -- common/autotest_common.sh@857 -- # local i 00:12:04.059 12:57:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:04.059 12:57:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:04.059 12:57:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:04.059 12:57:22 -- common/autotest_common.sh@861 -- # break 00:12:04.059 12:57:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:04.059 12:57:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:04.059 12:57:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.059 1+0 records in 00:12:04.059 1+0 records out 00:12:04.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298374 s, 13.7 MB/s 00:12:04.059 12:57:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.059 12:57:22 -- common/autotest_common.sh@874 -- # size=4096 00:12:04.059 12:57:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.059 12:57:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:04.059 12:57:22 -- common/autotest_common.sh@877 -- # return 0 00:12:04.059 12:57:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.059 12:57:22 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:04.059 12:57:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:04.318 /dev/nbd14 00:12:04.318 12:57:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:04.318 12:57:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:04.318 12:57:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:04.318 12:57:23 -- common/autotest_common.sh@857 -- # local i 00:12:04.318 12:57:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:04.318 12:57:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:04.318 12:57:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:04.318 12:57:23 -- common/autotest_common.sh@861 -- # break 00:12:04.318 12:57:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:04.318 12:57:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:04.318 12:57:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.318 1+0 records in 00:12:04.318 1+0 records out 00:12:04.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000709469 s, 5.8 MB/s 00:12:04.318 12:57:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.318 12:57:23 -- common/autotest_common.sh@874 -- # size=4096 00:12:04.318 12:57:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.318 12:57:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:04.318 12:57:23 -- common/autotest_common.sh@877 -- # return 0 00:12:04.318 12:57:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
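The nbd_start_disk / waitfornbd pairs repeating here come from nbd_start_disks, which pairs bdev_list with nbd_list and exports each bdev over NBD through the RPC socket. Roughly, as a sketch inferred from the trace (the exact argument handling and the $rootdir variable are assumptions; the trace uses the absolute /home/vagrant/spdk_repo/spdk path):

nbd_start_disks() {
	local rpc_server=$1
	local bdev_list=($2)
	local nbd_list=($3)
	local i
	for ((i = 0; i < ${#nbd_list[@]}; i++)); do
		# Export the i-th bdev on the i-th /dev/nbdX node over the RPC socket.
		"$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_start_disk \
			"${bdev_list[i]}" "${nbd_list[i]}"
		# Block until the kernel exposes a readable device node.
		waitfornbd "$(basename "${nbd_list[i]}")"
	done
}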
00:12:04.318 12:57:23 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:04.318 12:57:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:04.576 /dev/nbd15 00:12:04.576 12:57:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:04.576 12:57:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:04.576 12:57:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:04.576 12:57:23 -- common/autotest_common.sh@857 -- # local i 00:12:04.576 12:57:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:04.576 12:57:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:04.576 12:57:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:04.576 12:57:23 -- common/autotest_common.sh@861 -- # break 00:12:04.576 12:57:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:04.576 12:57:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:04.576 12:57:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.576 1+0 records in 00:12:04.576 1+0 records out 00:12:04.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461783 s, 8.9 MB/s 00:12:04.576 12:57:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.576 12:57:23 -- common/autotest_common.sh@874 -- # size=4096 00:12:04.576 12:57:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.576 12:57:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:04.576 12:57:23 -- common/autotest_common.sh@877 -- # return 0 00:12:04.576 12:57:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.576 12:57:23 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:04.576 12:57:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:04.835 /dev/nbd2 00:12:04.835 12:57:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:04.835 12:57:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:04.835 12:57:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:04.835 12:57:23 -- common/autotest_common.sh@857 -- # local i 00:12:04.835 12:57:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:04.835 12:57:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:04.835 12:57:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:04.835 12:57:23 -- common/autotest_common.sh@861 -- # break 00:12:04.835 12:57:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:04.835 12:57:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:04.835 12:57:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.835 1+0 records in 00:12:04.835 1+0 records out 00:12:04.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137404 s, 3.0 MB/s 00:12:04.835 12:57:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.835 12:57:23 -- common/autotest_common.sh@874 -- # size=4096 00:12:04.835 12:57:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.835 12:57:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:04.835 12:57:23 -- common/autotest_common.sh@877 -- # return 0 00:12:04.835 12:57:23 -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:12:04.835 12:57:23 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:04.835 12:57:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:05.093 /dev/nbd3 00:12:05.093 12:57:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:05.093 12:57:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:05.093 12:57:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:05.093 12:57:23 -- common/autotest_common.sh@857 -- # local i 00:12:05.093 12:57:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:05.093 12:57:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:05.093 12:57:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:05.093 12:57:23 -- common/autotest_common.sh@861 -- # break 00:12:05.093 12:57:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:05.093 12:57:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:05.093 12:57:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.093 1+0 records in 00:12:05.093 1+0 records out 00:12:05.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615443 s, 6.7 MB/s 00:12:05.093 12:57:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.093 12:57:23 -- common/autotest_common.sh@874 -- # size=4096 00:12:05.093 12:57:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.093 12:57:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:05.093 12:57:23 -- common/autotest_common.sh@877 -- # return 0 00:12:05.093 12:57:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.093 12:57:23 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.093 12:57:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:05.352 /dev/nbd4 00:12:05.352 12:57:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:05.352 12:57:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:05.352 12:57:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:05.352 12:57:24 -- common/autotest_common.sh@857 -- # local i 00:12:05.352 12:57:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:05.352 12:57:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:05.352 12:57:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:05.352 12:57:24 -- common/autotest_common.sh@861 -- # break 00:12:05.352 12:57:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:05.352 12:57:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:05.352 12:57:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.352 1+0 records in 00:12:05.352 1+0 records out 00:12:05.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474277 s, 8.6 MB/s 00:12:05.352 12:57:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.352 12:57:24 -- common/autotest_common.sh@874 -- # size=4096 00:12:05.352 12:57:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.352 12:57:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:05.352 12:57:24 -- common/autotest_common.sh@877 -- # return 0 00:12:05.352 12:57:24 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.352 12:57:24 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.352 12:57:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:05.611 /dev/nbd5 00:12:05.611 12:57:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:05.611 12:57:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:05.611 12:57:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:05.611 12:57:24 -- common/autotest_common.sh@857 -- # local i 00:12:05.611 12:57:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:05.611 12:57:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:05.611 12:57:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:05.611 12:57:24 -- common/autotest_common.sh@861 -- # break 00:12:05.611 12:57:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:05.611 12:57:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:05.611 12:57:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.611 1+0 records in 00:12:05.611 1+0 records out 00:12:05.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560909 s, 7.3 MB/s 00:12:05.611 12:57:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.611 12:57:24 -- common/autotest_common.sh@874 -- # size=4096 00:12:05.611 12:57:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.611 12:57:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:05.611 12:57:24 -- common/autotest_common.sh@877 -- # return 0 00:12:05.611 12:57:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.611 12:57:24 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.611 12:57:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:05.870 /dev/nbd6 00:12:05.870 12:57:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:05.870 12:57:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:05.870 12:57:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:05.870 12:57:24 -- common/autotest_common.sh@857 -- # local i 00:12:05.870 12:57:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:05.870 12:57:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:05.870 12:57:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:05.870 12:57:24 -- common/autotest_common.sh@861 -- # break 00:12:05.870 12:57:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:05.870 12:57:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:05.870 12:57:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.870 1+0 records in 00:12:05.870 1+0 records out 00:12:05.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465714 s, 8.8 MB/s 00:12:05.870 12:57:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.870 12:57:24 -- common/autotest_common.sh@874 -- # size=4096 00:12:05.870 12:57:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.870 12:57:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:05.870 12:57:24 -- common/autotest_common.sh@877 -- # return 0 00:12:05.870 12:57:24 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.870 12:57:24 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.870 12:57:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:06.129 /dev/nbd7 00:12:06.129 12:57:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:06.129 12:57:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:06.129 12:57:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:06.129 12:57:24 -- common/autotest_common.sh@857 -- # local i 00:12:06.129 12:57:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:06.129 12:57:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:06.129 12:57:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:06.129 12:57:24 -- common/autotest_common.sh@861 -- # break 00:12:06.129 12:57:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:06.129 12:57:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:06.129 12:57:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.129 1+0 records in 00:12:06.129 1+0 records out 00:12:06.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107618 s, 3.8 MB/s 00:12:06.129 12:57:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.129 12:57:24 -- common/autotest_common.sh@874 -- # size=4096 00:12:06.129 12:57:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.129 12:57:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:06.129 12:57:24 -- common/autotest_common.sh@877 -- # return 0 00:12:06.129 12:57:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.129 12:57:24 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.129 12:57:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:06.386 /dev/nbd8 00:12:06.386 12:57:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:06.386 12:57:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:06.386 12:57:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:06.386 12:57:25 -- common/autotest_common.sh@857 -- # local i 00:12:06.386 12:57:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:06.386 12:57:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:06.386 12:57:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:06.386 12:57:25 -- common/autotest_common.sh@861 -- # break 00:12:06.386 12:57:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:06.386 12:57:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:06.386 12:57:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.386 1+0 records in 00:12:06.386 1+0 records out 00:12:06.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107167 s, 3.8 MB/s 00:12:06.386 12:57:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.386 12:57:25 -- common/autotest_common.sh@874 -- # size=4096 00:12:06.386 12:57:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.386 12:57:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:06.386 12:57:25 -- common/autotest_common.sh@877 -- # return 0 00:12:06.386 12:57:25 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.386 12:57:25 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.386 12:57:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:06.643 /dev/nbd9 00:12:06.644 12:57:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:06.644 12:57:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:06.644 12:57:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:06.644 12:57:25 -- common/autotest_common.sh@857 -- # local i 00:12:06.644 12:57:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:06.644 12:57:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:06.644 12:57:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:06.644 12:57:25 -- common/autotest_common.sh@861 -- # break 00:12:06.644 12:57:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:06.644 12:57:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:06.644 12:57:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.644 1+0 records in 00:12:06.644 1+0 records out 00:12:06.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125135 s, 3.3 MB/s 00:12:06.644 12:57:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.644 12:57:25 -- common/autotest_common.sh@874 -- # size=4096 00:12:06.644 12:57:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.644 12:57:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:06.644 12:57:25 -- common/autotest_common.sh@877 -- # return 0 00:12:06.644 12:57:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.644 12:57:25 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.644 12:57:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:06.644 12:57:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:06.644 12:57:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:06.902 12:57:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd0", 00:12:06.902 "bdev_name": "Malloc0" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd1", 00:12:06.902 "bdev_name": "Malloc1p0" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd10", 00:12:06.902 "bdev_name": "Malloc1p1" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd11", 00:12:06.902 "bdev_name": "Malloc2p0" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd12", 00:12:06.902 "bdev_name": "Malloc2p1" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd13", 00:12:06.902 "bdev_name": "Malloc2p2" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd14", 00:12:06.902 "bdev_name": "Malloc2p3" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd15", 00:12:06.902 "bdev_name": "Malloc2p4" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd2", 00:12:06.902 "bdev_name": "Malloc2p5" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd3", 00:12:06.902 "bdev_name": "Malloc2p6" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd4", 00:12:06.902 "bdev_name": "Malloc2p7" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd5", 00:12:06.902 "bdev_name": 
"TestPT" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd6", 00:12:06.902 "bdev_name": "raid0" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd7", 00:12:06.902 "bdev_name": "concat0" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd8", 00:12:06.902 "bdev_name": "raid1" 00:12:06.902 }, 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd9", 00:12:06.902 "bdev_name": "AIO0" 00:12:06.902 } 00:12:06.902 ]' 00:12:06.902 12:57:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:06.902 { 00:12:06.902 "nbd_device": "/dev/nbd0", 00:12:06.902 "bdev_name": "Malloc0" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd1", 00:12:06.903 "bdev_name": "Malloc1p0" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd10", 00:12:06.903 "bdev_name": "Malloc1p1" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd11", 00:12:06.903 "bdev_name": "Malloc2p0" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd12", 00:12:06.903 "bdev_name": "Malloc2p1" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd13", 00:12:06.903 "bdev_name": "Malloc2p2" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd14", 00:12:06.903 "bdev_name": "Malloc2p3" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd15", 00:12:06.903 "bdev_name": "Malloc2p4" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd2", 00:12:06.903 "bdev_name": "Malloc2p5" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd3", 00:12:06.903 "bdev_name": "Malloc2p6" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd4", 00:12:06.903 "bdev_name": "Malloc2p7" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd5", 00:12:06.903 "bdev_name": "TestPT" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd6", 00:12:06.903 "bdev_name": "raid0" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd7", 00:12:06.903 "bdev_name": "concat0" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd8", 00:12:06.903 "bdev_name": "raid1" 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "nbd_device": "/dev/nbd9", 00:12:06.903 "bdev_name": "AIO0" 00:12:06.903 } 00:12:06.903 ]' 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:06.903 /dev/nbd1 00:12:06.903 /dev/nbd10 00:12:06.903 /dev/nbd11 00:12:06.903 /dev/nbd12 00:12:06.903 /dev/nbd13 00:12:06.903 /dev/nbd14 00:12:06.903 /dev/nbd15 00:12:06.903 /dev/nbd2 00:12:06.903 /dev/nbd3 00:12:06.903 /dev/nbd4 00:12:06.903 /dev/nbd5 00:12:06.903 /dev/nbd6 00:12:06.903 /dev/nbd7 00:12:06.903 /dev/nbd8 00:12:06.903 /dev/nbd9' 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:06.903 /dev/nbd1 00:12:06.903 /dev/nbd10 00:12:06.903 /dev/nbd11 00:12:06.903 /dev/nbd12 00:12:06.903 /dev/nbd13 00:12:06.903 /dev/nbd14 00:12:06.903 /dev/nbd15 00:12:06.903 /dev/nbd2 00:12:06.903 /dev/nbd3 00:12:06.903 /dev/nbd4 00:12:06.903 /dev/nbd5 00:12:06.903 /dev/nbd6 00:12:06.903 /dev/nbd7 00:12:06.903 /dev/nbd8 00:12:06.903 /dev/nbd9' 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@65 -- # count=16 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@66 -- # echo 16 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@95 -- # count=16 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:06.903 12:57:25 -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:06.903 256+0 records in 00:12:06.903 256+0 records out 00:12:06.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00780595 s, 134 MB/s 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:06.903 12:57:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:07.161 256+0 records in 00:12:07.161 256+0 records out 00:12:07.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138509 s, 7.6 MB/s 00:12:07.161 12:57:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:07.161 12:57:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:07.161 256+0 records in 00:12:07.161 256+0 records out 00:12:07.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145797 s, 7.2 MB/s 00:12:07.161 12:57:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:07.161 12:57:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:07.420 256+0 records in 00:12:07.420 256+0 records out 00:12:07.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144585 s, 7.3 MB/s 00:12:07.420 12:57:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:07.420 12:57:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:07.420 256+0 records in 00:12:07.420 256+0 records out 00:12:07.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141279 s, 7.4 MB/s 00:12:07.420 12:57:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:07.420 12:57:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:07.678 256+0 records in 00:12:07.678 256+0 records out 00:12:07.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138392 s, 7.6 MB/s 00:12:07.678 12:57:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:07.678 12:57:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:07.678 256+0 records in 00:12:07.678 256+0 records out 00:12:07.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143048 s, 7.3 MB/s 00:12:07.678 12:57:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:07.678 12:57:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:07.936 256+0 records in 00:12:07.936 256+0 records out 00:12:07.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143013 s, 7.3 MB/s 00:12:07.936 12:57:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
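The block of dd transfers running here is nbd_dd_data_verify: a 1 MiB random pattern (256 x 4 KiB) is generated once, written to every NBD node with O_DIRECT, and later compared back byte-for-byte with cmp. A sketch reconstructed from this trace; the $rootdir prefix stands in for /home/vagrant/spdk_repo/spdk and is an assumption:

nbd_dd_data_verify() {
	local nbd_list=($1)
	local operation=$2
	local i
	local tmp_file=$rootdir/test/bdev/nbdrandtest   # literal path in the trace
	if [ "$operation" = "write" ]; then
		# One shared 1 MiB random pattern, then an O_DIRECT copy onto every node.
		dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
		for i in "${nbd_list[@]}"; do
			dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
		done
	elif [ "$operation" = "verify" ]; then
		# Byte-compare the first 1 MiB of every node against the same pattern file.
		for i in "${nbd_list[@]}"; do
			cmp -b -n 1M "$tmp_file" "$i"
		done
		rm "$tmp_file"
	fi
}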
00:12:07.936 12:57:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:08.195 256+0 records in 00:12:08.195 256+0 records out 00:12:08.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143339 s, 7.3 MB/s 00:12:08.195 12:57:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.195 12:57:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:08.195 256+0 records in 00:12:08.195 256+0 records out 00:12:08.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142974 s, 7.3 MB/s 00:12:08.195 12:57:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.195 12:57:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:08.453 256+0 records in 00:12:08.453 256+0 records out 00:12:08.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144804 s, 7.2 MB/s 00:12:08.453 12:57:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.453 12:57:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:08.453 256+0 records in 00:12:08.453 256+0 records out 00:12:08.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136235 s, 7.7 MB/s 00:12:08.453 12:57:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.453 12:57:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:08.712 256+0 records in 00:12:08.712 256+0 records out 00:12:08.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145171 s, 7.2 MB/s 00:12:08.712 12:57:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.712 12:57:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:08.712 256+0 records in 00:12:08.712 256+0 records out 00:12:08.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14874 s, 7.0 MB/s 00:12:08.712 12:57:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.712 12:57:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:08.970 256+0 records in 00:12:08.970 256+0 records out 00:12:08.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146119 s, 7.2 MB/s 00:12:08.970 12:57:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.970 12:57:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:09.228 256+0 records in 00:12:09.228 256+0 records out 00:12:09.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147504 s, 7.1 MB/s 00:12:09.228 12:57:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.228 12:57:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:09.228 256+0 records in 00:12:09.228 256+0 records out 00:12:09.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.221149 s, 4.7 MB/s 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:12:09.228 12:57:28 -- 
bdev/nbd_common.sh@70 -- # local nbd_list 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.228 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@51 -- # local i 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.487 12:57:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:09.799 12:57:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:09.799 12:57:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:09.799 12:57:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:09.799 12:57:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.799 12:57:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.799 12:57:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:09.799 12:57:28 -- bdev/nbd_common.sh@41 -- # break 00:12:09.799 12:57:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.799 12:57:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.799 12:57:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:10.056 12:57:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:10.056 12:57:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:10.056 12:57:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:10.056 12:57:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.056 12:57:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.056 12:57:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:10.056 12:57:28 -- bdev/nbd_common.sh@41 -- # break 00:12:10.056 12:57:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.056 12:57:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.056 12:57:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:10.314 12:57:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:10.314 12:57:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:10.314 12:57:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:10.314 12:57:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.314 12:57:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.314 12:57:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:10.314 12:57:28 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:10.314 12:57:29 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:10.314 12:57:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.314 12:57:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:10.314 12:57:29 -- bdev/nbd_common.sh@41 -- # break 00:12:10.314 12:57:29 -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.314 12:57:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.314 12:57:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd11 00:12:10.573 12:57:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:10.573 12:57:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:10.573 12:57:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:10.573 12:57:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.573 12:57:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.573 12:57:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:10.573 12:57:29 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:10.832 12:57:29 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:10.832 12:57:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.832 12:57:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:10.832 12:57:29 -- bdev/nbd_common.sh@41 -- # break 00:12:10.832 12:57:29 -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.832 12:57:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.832 12:57:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:11.090 12:57:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:11.090 12:57:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:11.090 12:57:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:11.090 12:57:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.090 12:57:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.090 12:57:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:11.090 12:57:29 -- bdev/nbd_common.sh@41 -- # break 00:12:11.090 12:57:29 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.090 12:57:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.090 12:57:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:11.349 12:57:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:11.349 12:57:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:11.349 12:57:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:11.349 12:57:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.349 12:57:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.349 12:57:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:11.349 12:57:29 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:11.349 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:11.349 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.349 12:57:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:11.349 12:57:30 -- bdev/nbd_common.sh@41 -- # break 00:12:11.349 12:57:30 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.349 12:57:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.349 12:57:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.607 
12:57:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@41 -- # break 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.607 12:57:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@41 -- # break 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.866 12:57:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:12.125 12:57:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:12.125 12:57:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:12.125 12:57:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:12.125 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.125 12:57:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.125 12:57:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:12.125 12:57:30 -- bdev/nbd_common.sh@41 -- # break 00:12:12.125 12:57:30 -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.125 12:57:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.125 12:57:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@41 -- # break 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.383 12:57:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:12.642 12:57:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:12.642 12:57:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:12.642 12:57:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:12.642 12:57:31 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.642 12:57:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.642 12:57:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:12.642 12:57:31 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@41 -- # break 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:12.901 12:57:31 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:13.159 12:57:31 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:13.159 12:57:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.159 12:57:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:13.159 12:57:31 -- bdev/nbd_common.sh@41 -- # break 00:12:13.159 12:57:31 -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.159 12:57:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.159 12:57:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@41 -- # break 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.427 12:57:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:13.688 12:57:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:13.688 12:57:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:13.688 12:57:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:13.688 12:57:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.688 12:57:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.688 12:57:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:13.688 12:57:32 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:13.946 12:57:32 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.947 12:57:32 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@41 -- # break 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@41 -- # break 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.947 12:57:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:14.205 12:57:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:14.205 12:57:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:14.205 12:57:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:14.205 12:57:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.205 12:57:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.205 12:57:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:14.205 12:57:33 -- bdev/nbd_common.sh@41 -- # break 00:12:14.205 12:57:33 -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.205 12:57:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:14.205 12:57:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:14.205 12:57:33 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:14.463 12:57:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:14.463 12:57:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:14.463 12:57:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@65 -- # true 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@65 -- # count=0 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@104 -- # count=0 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@109 -- # return 0 00:12:14.721 12:57:33 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:14.721 12:57:33 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 
00:12:14.979 malloc_lvol_verify 00:12:14.979 12:57:33 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:15.237 9c0a7550-c84a-431e-93b6-a0b6cc018e28 00:12:15.237 12:57:33 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:15.237 69380644-4604-4327-9499-2cb177ec8883 00:12:15.237 12:57:34 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:15.495 /dev/nbd0 00:12:15.495 12:57:34 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:15.495 mke2fs 1.45.5 (07-Jan-2020) 00:12:15.495 00:12:15.495 Filesystem too small for a journal 00:12:15.495 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:15.495 00:12:15.495 Allocating group tables: 0/1 done 00:12:15.495 Writing inode tables: 0/1 done 00:12:15.495 Writing superblocks and filesystem accounting information: 0/1 done 00:12:15.495 00:12:15.495 12:57:34 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:15.495 12:57:34 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:15.495 12:57:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.495 12:57:34 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:15.495 12:57:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:15.495 12:57:34 -- bdev/nbd_common.sh@51 -- # local i 00:12:15.495 12:57:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.495 12:57:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@41 -- # break 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:15.753 12:57:34 -- bdev/nbd_common.sh@147 -- # return 0 00:12:15.753 12:57:34 -- bdev/blockdev.sh@324 -- # killprocess 111184 00:12:15.753 12:57:34 -- common/autotest_common.sh@926 -- # '[' -z 111184 ']' 00:12:15.753 12:57:34 -- common/autotest_common.sh@930 -- # kill -0 111184 00:12:15.753 12:57:34 -- common/autotest_common.sh@931 -- # uname 00:12:15.753 12:57:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:15.753 12:57:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111184 00:12:16.011 12:57:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:16.011 12:57:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:16.011 12:57:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111184' 00:12:16.011 killing process with pid 111184 00:12:16.011 12:57:34 -- common/autotest_common.sh@945 -- # kill 
111184 00:12:16.011 12:57:34 -- common/autotest_common.sh@950 -- # wait 111184 00:12:17.913 12:57:36 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:17.913 00:12:17.913 real 0m26.494s 00:12:17.913 user 0m35.264s 00:12:17.913 sys 0m9.040s 00:12:17.913 12:57:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.913 12:57:36 -- common/autotest_common.sh@10 -- # set +x 00:12:17.913 ************************************ 00:12:17.913 END TEST bdev_nbd 00:12:17.913 ************************************ 00:12:17.913 12:57:36 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:17.913 12:57:36 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:17.913 12:57:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:17.913 12:57:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:17.913 12:57:36 -- common/autotest_common.sh@10 -- # set +x 00:12:17.913 ************************************ 00:12:17.913 START TEST bdev_fio 00:12:17.913 ************************************ 00:12:17.913 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:17.913 12:57:36 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@329 -- # local env_context 00:12:17.913 12:57:36 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:17.913 12:57:36 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:17.913 12:57:36 -- bdev/blockdev.sh@337 -- # echo '' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:17.913 12:57:36 -- bdev/blockdev.sh@337 -- # env_context= 00:12:17.913 12:57:36 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:17.913 12:57:36 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:17.913 12:57:36 -- common/autotest_common.sh@1260 -- # local workload=verify 00:12:17.913 12:57:36 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:12:17.913 12:57:36 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:17.913 12:57:36 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:17.913 12:57:36 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:17.913 12:57:36 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:12:17.913 12:57:36 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:17.913 12:57:36 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:17.913 12:57:36 -- common/autotest_common.sh@1280 -- # cat 00:12:17.913 12:57:36 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:12:17.913 12:57:36 -- common/autotest_common.sh@1293 -- # cat 00:12:17.913 12:57:36 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:12:17.913 12:57:36 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:12:17.913 12:57:36 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:17.913 12:57:36 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo 
filename=Malloc0 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:12:17.913 12:57:36 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.913 12:57:36 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:12:17.913 12:57:36 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:17.913 12:57:36 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:17.913 12:57:36 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:17.913 12:57:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:17.913 12:57:36 -- common/autotest_common.sh@10 -- # set +x 00:12:17.913 ************************************ 00:12:17.913 START TEST bdev_fio_rw_verify 00:12:17.913 ************************************ 00:12:17.913 12:57:36 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:17.913 12:57:36 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:17.913 12:57:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:17.913 12:57:36 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:12:17.913 12:57:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:17.913 12:57:36 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:17.913 12:57:36 -- common/autotest_common.sh@1320 -- # shift 00:12:17.913 12:57:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:17.914 12:57:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:17.914 12:57:36 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:17.914 12:57:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:17.914 12:57:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:17.914 12:57:36 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:12:17.914 12:57:36 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:12:17.914 12:57:36 -- common/autotest_common.sh@1326 -- # break 00:12:17.914 12:57:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:17.914 12:57:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:17.914 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.914 fio-3.35 00:12:17.914 Starting 16 threads 00:12:30.115 00:12:30.115 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=112464: Tue Jun 11 12:57:48 2024 00:12:30.115 read: IOPS=86.2k, BW=337MiB/s (353MB/s)(3369MiB/10008msec) 00:12:30.115 slat (usec): min=2, max=36036, avg=31.00, stdev=371.60 00:12:30.115 clat (usec): min=10, max=36242, avg=253.03, stdev=1112.24 00:12:30.115 lat (usec): min=26, max=36261, avg=284.03, stdev=1173.21 00:12:30.115 clat percentiles (usec): 00:12:30.115 | 50.000th=[ 157], 99.000th=[ 652], 99.900th=[16188], 99.990th=[24249], 00:12:30.115 | 99.999th=[34341] 00:12:30.115 write: IOPS=138k, BW=540MiB/s (566MB/s)(5330MiB/9871msec); 0 zone resets 00:12:30.115 slat (usec): min=5, max=66578, avg=58.62, stdev=555.49 00:12:30.115 clat (usec): min=9, max=66877, avg=337.41, stdev=1309.38 00:12:30.115 lat (usec): min=38, max=66917, avg=396.03, stdev=1421.82 00:12:30.115 clat percentiles (usec): 00:12:30.115 | 50.000th=[ 204], 99.000th=[ 2704], 99.900th=[16319], 99.990th=[27919], 00:12:30.115 | 99.999th=[39584] 00:12:30.115 bw ( KiB/s): min=365888, max=866792, per=98.42%, avg=544211.66, stdev=9412.10, samples=305 00:12:30.115 iops : min=91472, max=216698, avg=136052.81, stdev=2353.02, samples=305 00:12:30.115 lat (usec) : 10=0.01%, 20=0.01%, 50=0.71%, 100=14.96%, 250=60.25% 00:12:30.115 lat (usec) : 500=20.85%, 750=1.71%, 1000=0.50% 00:12:30.115 lat (msec) : 2=0.11%, 4=0.09%, 10=0.25%, 20=0.53%, 50=0.04% 00:12:30.115 lat (msec) : 100=0.01% 00:12:30.115 cpu : usr=58.13%, sys=1.90%, ctx=233624, majf=0, minf=94524 00:12:30.115 IO depths : 1=11.6%, 2=24.1%, 4=51.3%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:30.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.115 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:12:30.115 issued rwts: total=862471,1364526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.115 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:30.115 00:12:30.115 Run status group 0 (all jobs): 00:12:30.115 READ: bw=337MiB/s (353MB/s), 337MiB/s-337MiB/s (353MB/s-353MB/s), io=3369MiB (3533MB), run=10008-10008msec 00:12:30.115 WRITE: bw=540MiB/s (566MB/s), 540MiB/s-540MiB/s (566MB/s-566MB/s), io=5330MiB (5589MB), run=9871-9871msec 00:12:31.491 ----------------------------------------------------- 00:12:31.491 Suppressions used: 00:12:31.491 count bytes template 00:12:31.491 16 140 /usr/src/fio/parse.c 00:12:31.491 10906 1046976 /usr/src/fio/iolog.c 00:12:31.491 2 596 libcrypto.so 00:12:31.491 ----------------------------------------------------- 00:12:31.491 00:12:31.491 ************************************ 00:12:31.491 END TEST bdev_fio_rw_verify 00:12:31.491 ************************************ 00:12:31.491 00:12:31.491 real 0m13.727s 00:12:31.491 user 1m38.125s 00:12:31.491 sys 0m3.921s 00:12:31.491 12:57:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.491 12:57:50 -- common/autotest_common.sh@10 -- # set +x 00:12:31.491 12:57:50 -- bdev/blockdev.sh@348 -- # rm -f 00:12:31.491 12:57:50 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:31.491 12:57:50 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:31.491 12:57:50 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:31.491 12:57:50 -- common/autotest_common.sh@1260 -- # local workload=trim 00:12:31.491 12:57:50 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:12:31.491 12:57:50 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:31.491 12:57:50 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:31.491 12:57:50 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:31.491 12:57:50 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:12:31.491 12:57:50 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:31.491 12:57:50 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:31.491 12:57:50 -- common/autotest_common.sh@1280 -- # cat 00:12:31.491 12:57:50 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:12:31.491 12:57:50 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:12:31.491 12:57:50 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:12:31.491 12:57:50 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:31.492 12:57:50 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d4ef7981-04aa-41c5-bd36-25402e0b5a8d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4ef7981-04aa-41c5-bd36-25402e0b5a8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' 
}' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "24384e09-7911-5926-af6f-39dca7e09713"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "24384e09-7911-5926-af6f-39dca7e09713",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "a2298633-1b06-5b53-ad6d-96656ffb240a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a2298633-1b06-5b53-ad6d-96656ffb240a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "f45f29cb-f771-5007-96d8-e63df244523e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f45f29cb-f771-5007-96d8-e63df244523e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "f1f516ca-a696-5999-b142-9ffaf8603d50"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1f516ca-a696-5999-b142-9ffaf8603d50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "7a1b29fd-85e4-5cf1-a63e-2d234fd3b865"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a1b29fd-85e4-5cf1-a63e-2d234fd3b865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "bd347b62-69c4-5c39-9dab-1e3ed610c360"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bd347b62-69c4-5c39-9dab-1e3ed610c360",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a4bacf07-d7e2-5b5e-8f4b-30239154d39b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a4bacf07-d7e2-5b5e-8f4b-30239154d39b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "269bf124-0662-58c2-ad4b-837c5a58d1d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "269bf124-0662-58c2-ad4b-837c5a58d1d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "f6cc25d1-e36d-5973-9c5d-59f6f9c78d52"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f6cc25d1-e36d-5973-9c5d-59f6f9c78d52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "351e7602-2c9e-5b05-a3c7-9d2572040471"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"351e7602-2c9e-5b05-a3c7-9d2572040471",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "fcec6c0a-b0b2-5ddc-99dc-46d180b015cf"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fcec6c0a-b0b2-5ddc-99dc-46d180b015cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "233cbe36-1fdf-41b4-b25f-03f3bad1ff65"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "233cbe36-1fdf-41b4-b25f-03f3bad1ff65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "233cbe36-1fdf-41b4-b25f-03f3bad1ff65",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "e3e1758c-9ada-46ad-9e4d-0611b2e2d013",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "459a27ec-368a-4d8e-9698-564da0d70b95",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "8f037dc4-ab21-4627-90b7-1d1accc5321a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8f037dc4-ab21-4627-90b7-1d1accc5321a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8f037dc4-ab21-4627-90b7-1d1accc5321a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "82ba9584-5ada-4266-931a-bd880a126de9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "5aac7698-e522-4c45-9534-f30ba1a6ad41",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ca32978e-92ad-417b-8230-fe6489fbd354"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ca32978e-92ad-417b-8230-fe6489fbd354",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ca32978e-92ad-417b-8230-fe6489fbd354",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2634d618-2893-4886-97ef-c76b9b7268a7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "e7fd4883-0e23-4714-b440-5b20a91fa79f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ae133669-3fcf-4b1c-9ffc-62b8710617e2"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ae133669-3fcf-4b1c-9ffc-62b8710617e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:31.492 12:57:50 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:12:31.492 Malloc1p0 00:12:31.492 Malloc1p1 00:12:31.492 Malloc2p0 00:12:31.492 Malloc2p1 00:12:31.492 Malloc2p2 00:12:31.492 Malloc2p3 00:12:31.492 Malloc2p4 00:12:31.492 Malloc2p5 00:12:31.492 Malloc2p6 00:12:31.492 Malloc2p7 00:12:31.492 TestPT 00:12:31.492 raid0 00:12:31.492 concat0 ]] 00:12:31.492 12:57:50 -- bdev/blockdev.sh@354 -- 
# jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:31.494 12:57:50 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d4ef7981-04aa-41c5-bd36-25402e0b5a8d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4ef7981-04aa-41c5-bd36-25402e0b5a8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "24384e09-7911-5926-af6f-39dca7e09713"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "24384e09-7911-5926-af6f-39dca7e09713",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "a2298633-1b06-5b53-ad6d-96656ffb240a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a2298633-1b06-5b53-ad6d-96656ffb240a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "f45f29cb-f771-5007-96d8-e63df244523e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f45f29cb-f771-5007-96d8-e63df244523e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "f1f516ca-a696-5999-b142-9ffaf8603d50"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1f516ca-a696-5999-b142-9ffaf8603d50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "7a1b29fd-85e4-5cf1-a63e-2d234fd3b865"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a1b29fd-85e4-5cf1-a63e-2d234fd3b865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "bd347b62-69c4-5c39-9dab-1e3ed610c360"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bd347b62-69c4-5c39-9dab-1e3ed610c360",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a4bacf07-d7e2-5b5e-8f4b-30239154d39b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a4bacf07-d7e2-5b5e-8f4b-30239154d39b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "269bf124-0662-58c2-ad4b-837c5a58d1d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "269bf124-0662-58c2-ad4b-837c5a58d1d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' 
"f6cc25d1-e36d-5973-9c5d-59f6f9c78d52"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f6cc25d1-e36d-5973-9c5d-59f6f9c78d52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "351e7602-2c9e-5b05-a3c7-9d2572040471"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "351e7602-2c9e-5b05-a3c7-9d2572040471",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "fcec6c0a-b0b2-5ddc-99dc-46d180b015cf"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fcec6c0a-b0b2-5ddc-99dc-46d180b015cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "233cbe36-1fdf-41b4-b25f-03f3bad1ff65"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "233cbe36-1fdf-41b4-b25f-03f3bad1ff65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "233cbe36-1fdf-41b4-b25f-03f3bad1ff65",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": 
"e3e1758c-9ada-46ad-9e4d-0611b2e2d013",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "459a27ec-368a-4d8e-9698-564da0d70b95",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "8f037dc4-ab21-4627-90b7-1d1accc5321a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8f037dc4-ab21-4627-90b7-1d1accc5321a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8f037dc4-ab21-4627-90b7-1d1accc5321a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "82ba9584-5ada-4266-931a-bd880a126de9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "5aac7698-e522-4c45-9534-f30ba1a6ad41",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ca32978e-92ad-417b-8230-fe6489fbd354"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ca32978e-92ad-417b-8230-fe6489fbd354",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ca32978e-92ad-417b-8230-fe6489fbd354",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2634d618-2893-4886-97ef-c76b9b7268a7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "e7fd4883-0e23-4714-b440-5b20a91fa79f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ae133669-3fcf-4b1c-9ffc-62b8710617e2"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ae133669-3fcf-4b1c-9ffc-62b8710617e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:31.752 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.752 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:12:31.752 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:12:31.752 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.752 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:12:31.752 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:12:31.752 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.752 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:12:31.752 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:12:31.752 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.752 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:12:31.752 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:12:31.752 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.753 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:12:31.753 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:12:31.753 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.753 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:12:31.753 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:12:31.753 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.753 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:12:31.753 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:12:31.753 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.753 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:12:31.753 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:12:31.753 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.753 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:12:31.753 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:12:31.753 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.753 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:12:31.753 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:12:31.753 12:57:50 -- bdev/blockdev.sh@354 -- # 
for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.753 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:12:31.753 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:12:31.753 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.753 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:12:31.753 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:12:31.753 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.753 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:12:31.753 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:12:31.753 12:57:50 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.753 12:57:50 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:12:31.753 12:57:50 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:12:31.753 12:57:50 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:31.753 12:57:50 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:31.753 12:57:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:31.753 12:57:50 -- common/autotest_common.sh@10 -- # set +x 00:12:31.753 ************************************ 00:12:31.753 START TEST bdev_fio_trim 00:12:31.753 ************************************ 00:12:31.753 12:57:50 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:31.753 12:57:50 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:31.753 12:57:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:31.753 12:57:50 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:12:31.753 12:57:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:31.753 12:57:50 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:31.753 12:57:50 -- common/autotest_common.sh@1320 -- # shift 00:12:31.753 12:57:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:31.753 12:57:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:31.753 12:57:50 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:31.753 12:57:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:31.753 12:57:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:31.753 12:57:50 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 
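The xtrace above is the sanitizer shim in common/autotest_common.sh: because the external fio binary is not itself built with AddressSanitizer, the script resolves the ASan runtime that the SPDK fio plugin links against (ldd | grep libasan | awk '{print $3}') so it can be preloaded ahead of the plugin in the next lines. A minimal stand-alone sketch of the same pattern, with illustrative paths:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    fio_bin=/usr/src/fio/fio

    # Resolve the sanitizer runtime the plugin was linked against, if any.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # Preload the sanitizer first, then the spdk_bdev ioengine, and run the job file.
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" "$fio_bin" \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=./bdev.json ./bdev.fio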
00:12:31.753 12:57:50 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:12:31.753 12:57:50 -- common/autotest_common.sh@1326 -- # break 00:12:31.753 12:57:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:31.753 12:57:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:31.753 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.753 fio-3.35 00:12:31.753 Starting 14 threads 00:12:43.944 00:12:43.944 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=112708: Tue Jun 11 12:58:01 2024 00:12:43.944 write: IOPS=136k, BW=532MiB/s (557MB/s)(5320MiB/10007msec); 0 zone resets 00:12:43.944 slat (usec): min=2, max=36055, avg=36.59, stdev=398.76 00:12:43.944 clat (usec): min=18, max=40292, avg=257.87, stdev=1083.49 00:12:43.944 lat (usec): min=24, max=40313, avg=294.46, stdev=1153.92 00:12:43.944 clat percentiles (usec): 00:12:43.944 | 50.000th=[ 172], 99.000th=[ 433], 99.900th=[16319], 99.990th=[20317], 00:12:43.944 | 99.999th=[28181] 00:12:43.944 bw ( KiB/s): min=371104, max=890920, per=100.00%, avg=547850.74, stdev=11142.79, samples=266 00:12:43.944 iops : min=92776, max=222730, avg=136962.63, stdev=2785.70, samples=266 00:12:43.944 trim: IOPS=136k, BW=532MiB/s (557MB/s)(5320MiB/10007msec); 0 zone resets 00:12:43.944 slat (usec): min=3, max=28040, avg=24.80, stdev=326.63 00:12:43.944 clat (usec): min=3, max=40313, avg=283.82, stdev=1115.75 
00:12:43.944 lat (usec): min=8, max=40330, avg=308.62, stdev=1162.28 00:12:43.944 clat percentiles (usec): 00:12:43.944 | 50.000th=[ 194], 99.000th=[ 420], 99.900th=[16319], 99.990th=[20317], 00:12:43.944 | 99.999th=[28181] 00:12:43.944 bw ( KiB/s): min=371104, max=890928, per=100.00%, avg=547851.16, stdev=11142.89, samples=266 00:12:43.944 iops : min=92776, max=222732, avg=136962.74, stdev=2785.72, samples=266 00:12:43.944 lat (usec) : 4=0.01%, 10=0.03%, 20=0.09%, 50=0.57%, 100=8.18% 00:12:43.944 lat (usec) : 250=70.43%, 500=20.00%, 750=0.10%, 1000=0.02% 00:12:43.944 lat (msec) : 2=0.02%, 4=0.01%, 10=0.09%, 20=0.45%, 50=0.02% 00:12:43.944 cpu : usr=68.84%, sys=0.53%, ctx=168965, majf=0, minf=800 00:12:43.944 IO depths : 1=12.4%, 2=24.8%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.944 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.944 issued rwts: total=0,1361810,1361813,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.944 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:43.944 00:12:43.944 Run status group 0 (all jobs): 00:12:43.944 WRITE: bw=532MiB/s (557MB/s), 532MiB/s-532MiB/s (557MB/s-557MB/s), io=5320MiB (5578MB), run=10007-10007msec 00:12:43.944 TRIM: bw=532MiB/s (557MB/s), 532MiB/s-532MiB/s (557MB/s-557MB/s), io=5320MiB (5578MB), run=10007-10007msec 00:12:45.351 ----------------------------------------------------- 00:12:45.351 Suppressions used: 00:12:45.351 count bytes template 00:12:45.351 14 129 /usr/src/fio/parse.c 00:12:45.351 2 596 libcrypto.so 00:12:45.351 ----------------------------------------------------- 00:12:45.351 00:12:45.351 ************************************ 00:12:45.351 END TEST bdev_fio_trim 00:12:45.351 ************************************ 00:12:45.351 00:12:45.351 real 0m13.475s 00:12:45.351 user 1m41.102s 00:12:45.351 sys 0m1.623s 00:12:45.351 12:58:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.351 12:58:03 -- common/autotest_common.sh@10 -- # set +x 00:12:45.351 12:58:03 -- bdev/blockdev.sh@366 -- # rm -f 00:12:45.351 12:58:03 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:45.351 /home/vagrant/spdk_repo/spdk 00:12:45.351 ************************************ 00:12:45.351 END TEST bdev_fio 00:12:45.351 ************************************ 00:12:45.351 12:58:03 -- bdev/blockdev.sh@368 -- # popd 00:12:45.351 12:58:03 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:12:45.351 00:12:45.351 real 0m27.514s 00:12:45.351 user 3m19.449s 00:12:45.351 sys 0m5.618s 00:12:45.351 12:58:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.351 12:58:03 -- common/autotest_common.sh@10 -- # set +x 00:12:45.351 12:58:03 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:45.351 12:58:03 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:45.351 12:58:03 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:45.351 12:58:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:45.351 12:58:03 -- common/autotest_common.sh@10 -- # set +x 00:12:45.351 ************************************ 00:12:45.351 START TEST bdev_verify 00:12:45.351 ************************************ 00:12:45.351 12:58:03 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:45.351 [2024-06-11 12:58:04.061246] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:45.352 [2024-06-11 12:58:04.061946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112920 ] 00:12:45.611 [2024-06-11 12:58:04.242456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:45.870 [2024-06-11 12:58:04.483975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.870 [2024-06-11 12:58:04.483976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.128 [2024-06-11 12:58:04.833974] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.128 [2024-06-11 12:58:04.834351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.128 [2024-06-11 12:58:04.841945] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.128 [2024-06-11 12:58:04.842189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.128 [2024-06-11 12:58:04.849997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:46.128 [2024-06-11 12:58:04.850216] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:46.128 [2024-06-11 12:58:04.850365] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:46.387 [2024-06-11 12:58:05.025746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:46.387 [2024-06-11 12:58:05.026135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.387 [2024-06-11 12:58:05.026364] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:46.387 [2024-06-11 12:58:05.026498] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.387 [2024-06-11 12:58:05.029274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.387 [2024-06-11 12:58:05.029483] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:46.645 Running I/O for 5 seconds... 
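With the bdev stack from bdev.json reconstructed (including the TestPT passthru claiming Malloc3, as the vbdev_passthru notices above show), the verify pass is a single bdevperf run that drives a data-integrity workload against every bdev. A hedged sketch of the invocation used here, with an illustrative config path:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    # -q I/O queue depth, -o I/O size in bytes, -w workload type, -t run time
    # in seconds, -m reactor core mask; the JSON file recreates the bdevs at startup.
    "$bdevperf" --json ./bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The per-bdev latency table that follows is the summary bdevperf prints once the 5-second run completes.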
00:12:51.919 00:12:51.919 Latency(us) 00:12:51.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.919 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x0 length 0x1000 00:12:51.919 Malloc0 : 5.22 1267.94 4.95 0.00 0.00 100408.43 3038.49 179211.17 00:12:51.919 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x1000 length 0x1000 00:12:51.919 Malloc0 : 5.17 1353.42 5.29 0.00 0.00 93793.46 2189.50 229733.47 00:12:51.919 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x0 length 0x800 00:12:51.919 Malloc1p0 : 5.22 874.47 3.42 0.00 0.00 145455.61 4885.41 167772.16 00:12:51.919 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x800 length 0x800 00:12:51.919 Malloc1p0 : 5.18 948.13 3.70 0.00 0.00 133735.69 4855.62 139174.63 00:12:51.919 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x0 length 0x800 00:12:51.919 Malloc1p1 : 5.22 874.28 3.42 0.00 0.00 145227.14 5362.04 163005.91 00:12:51.919 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x800 length 0x800 00:12:51.919 Malloc1p1 : 5.18 947.91 3.70 0.00 0.00 133507.03 4706.68 134408.38 00:12:51.919 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x0 length 0x200 00:12:51.919 Malloc2p0 : 5.22 874.07 3.41 0.00 0.00 144965.59 4825.83 159192.90 00:12:51.919 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x200 length 0x200 00:12:51.919 Malloc2p0 : 5.18 947.70 3.70 0.00 0.00 133298.56 4468.36 129642.12 00:12:51.919 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x0 length 0x200 00:12:51.919 Malloc2p1 : 5.22 873.87 3.41 0.00 0.00 144722.96 4855.62 154426.65 00:12:51.919 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x200 length 0x200 00:12:51.919 Malloc2p1 : 5.18 947.48 3.70 0.00 0.00 133126.43 4468.36 123922.62 00:12:51.919 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x0 length 0x200 00:12:51.919 Malloc2p2 : 5.22 873.69 3.41 0.00 0.00 144456.62 5183.30 148707.14 00:12:51.919 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.919 Verification LBA range: start 0x200 length 0x200 00:12:51.920 Malloc2p2 : 5.18 947.27 3.70 0.00 0.00 132923.06 4498.15 118679.74 00:12:51.920 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x0 length 0x200 00:12:51.920 Malloc2p3 : 5.23 873.49 3.41 0.00 0.00 144184.37 5421.61 142987.64 00:12:51.920 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x200 length 0x200 00:12:51.920 Malloc2p3 : 5.18 947.08 3.70 0.00 0.00 132725.75 4676.89 112483.61 00:12:51.920 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x0 length 0x200 00:12:51.920 Malloc2p4 : 5.23 873.31 3.41 0.00 0.00 143881.60 
5064.15 136314.88 00:12:51.920 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x200 length 0x200 00:12:51.920 Malloc2p4 : 5.18 946.86 3.70 0.00 0.00 132505.62 4438.57 107240.73 00:12:51.920 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x0 length 0x200 00:12:51.920 Malloc2p5 : 5.23 873.11 3.41 0.00 0.00 143625.41 4915.20 131548.63 00:12:51.920 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x200 length 0x200 00:12:51.920 Malloc2p5 : 5.18 946.65 3.70 0.00 0.00 132299.33 4468.36 103904.35 00:12:51.920 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x0 length 0x200 00:12:51.920 Malloc2p6 : 5.23 872.92 3.41 0.00 0.00 143369.05 4289.63 127735.62 00:12:51.920 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x200 length 0x200 00:12:51.920 Malloc2p6 : 5.20 959.47 3.75 0.00 0.00 130808.20 4349.21 102951.10 00:12:51.920 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x0 length 0x200 00:12:51.920 Malloc2p7 : 5.23 872.72 3.41 0.00 0.00 143150.88 4349.21 122969.37 00:12:51.920 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x200 length 0x200 00:12:51.920 Malloc2p7 : 5.20 959.24 3.75 0.00 0.00 130617.96 4319.42 102951.10 00:12:51.920 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x0 length 0x1000 00:12:51.920 TestPT : 5.23 872.52 3.41 0.00 0.00 142893.38 4885.41 118203.11 00:12:51.920 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x1000 length 0x1000 00:12:51.920 TestPT : 5.20 946.89 3.70 0.00 0.00 132122.10 4796.04 103904.35 00:12:51.920 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x0 length 0x2000 00:12:51.920 raid0 : 5.23 872.33 3.41 0.00 0.00 142608.24 4855.62 112960.23 00:12:51.920 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x2000 length 0x2000 00:12:51.920 raid0 : 5.20 958.77 3.75 0.00 0.00 130308.29 4527.94 103904.35 00:12:51.920 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x0 length 0x2000 00:12:51.920 concat0 : 5.23 872.13 3.41 0.00 0.00 142351.34 4885.41 107717.35 00:12:51.920 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x2000 length 0x2000 00:12:51.920 concat0 : 5.21 958.53 3.74 0.00 0.00 130126.41 4408.79 103904.35 00:12:51.920 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x0 length 0x1000 00:12:51.920 raid1 : 5.23 871.94 3.41 0.00 0.00 142058.23 5183.30 108670.60 00:12:51.920 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x1000 length 0x1000 00:12:51.920 raid1 : 5.21 958.28 3.74 0.00 0.00 129900.33 5540.77 103904.35 00:12:51.920 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: 
start 0x0 length 0x4e2 00:12:51.920 AIO0 : 5.24 871.39 3.40 0.00 0.00 141801.62 5242.88 109147.23 00:12:51.920 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.920 Verification LBA range: start 0x4e2 length 0x4e2 00:12:51.920 AIO0 : 5.21 957.70 3.74 0.00 0.00 129685.19 4944.99 104380.97 00:12:51.920 =================================================================================================================== 00:12:51.920 Total : 29995.57 117.17 0.00 0.00 133976.81 2189.50 229733.47 00:12:53.952 ************************************ 00:12:53.952 END TEST bdev_verify 00:12:53.952 ************************************ 00:12:53.952 00:12:53.952 real 0m8.654s 00:12:53.952 user 0m15.456s 00:12:53.952 sys 0m0.720s 00:12:53.952 12:58:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.952 12:58:12 -- common/autotest_common.sh@10 -- # set +x 00:12:53.952 12:58:12 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:53.952 12:58:12 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:53.952 12:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:53.952 12:58:12 -- common/autotest_common.sh@10 -- # set +x 00:12:53.952 ************************************ 00:12:53.952 START TEST bdev_verify_big_io 00:12:53.952 ************************************ 00:12:53.952 12:58:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:53.952 [2024-06-11 12:58:12.729268] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:12:53.952 [2024-06-11 12:58:12.729629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113055 ] 00:12:54.210 [2024-06-11 12:58:12.885279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:54.470 [2024-06-11 12:58:13.064162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.470 [2024-06-11 12:58:13.064166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.731 [2024-06-11 12:58:13.409758] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:54.731 [2024-06-11 12:58:13.410038] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:54.731 [2024-06-11 12:58:13.417719] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:54.731 [2024-06-11 12:58:13.417933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:54.731 [2024-06-11 12:58:13.425781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:54.731 [2024-06-11 12:58:13.426009] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:54.731 [2024-06-11 12:58:13.426142] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:54.991 [2024-06-11 12:58:13.605225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:54.991 [2024-06-11 12:58:13.605723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.991 [2024-06-11 12:58:13.605934] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:54.991 [2024-06-11 12:58:13.606086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.991 [2024-06-11 12:58:13.608757] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.991 [2024-06-11 12:58:13.608936] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:55.250 [2024-06-11 12:58:13.947143] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:55.250 [2024-06-11 12:58:13.950454] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:55.250 [2024-06-11 12:58:13.954257] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:55.250 [2024-06-11 12:58:13.958027] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:55.250 [2024-06-11 12:58:13.961200] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:55.250 [2024-06-11 12:58:13.964909] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:55.250 [2024-06-11 12:58:13.968123] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:55.250 [2024-06-11 12:58:13.971927] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:55.251 [2024-06-11 12:58:13.975112] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:55.251 [2024-06-11 12:58:13.978691] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:55.251 [2024-06-11 12:58:13.981884] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:55.251 [2024-06-11 12:58:13.985637] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:55.251 [2024-06-11 12:58:13.988782] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:55.251 [2024-06-11 12:58:13.992400] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:55.251 [2024-06-11 12:58:13.996223] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:55.251 [2024-06-11 12:58:13.999374] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:12:55.251 [2024-06-11 12:58:14.075573] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:55.251 [2024-06-11 12:58:14.081596] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:55.509 Running I/O for 5 seconds... 00:13:02.076 00:13:02.076 Latency(us) 00:13:02.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.076 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.076 Verification LBA range: start 0x0 length 0x100 00:13:02.076 Malloc0 : 5.49 456.51 28.53 0.00 0.00 272716.88 15252.01 869364.83 00:13:02.076 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.076 Verification LBA range: start 0x100 length 0x100 00:13:02.076 Malloc0 : 5.51 456.79 28.55 0.00 0.00 276705.32 16920.20 1037136.99 00:13:02.077 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x80 00:13:02.077 Malloc1p0 : 5.79 143.16 8.95 0.00 0.00 845315.37 52190.49 1609087.53 00:13:02.077 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x80 length 0x80 00:13:02.077 Malloc1p0 : 5.56 352.31 22.02 0.00 0.00 353939.19 31457.28 549072.52 00:13:02.077 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x80 00:13:02.077 Malloc1p1 : 5.81 147.68 9.23 0.00 0.00 809681.38 39321.60 1616713.54 00:13:02.077 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x80 length 0x80 00:13:02.077 Malloc1p1 : 5.57 222.89 13.93 0.00 0.00 552901.45 23592.96 1387933.32 00:13:02.077 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x20 00:13:02.077 Malloc2p0 : 5.57 85.08 5.32 0.00 0.00 352307.79 8400.52 648210.62 00:13:02.077 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x20 length 0x20 00:13:02.077 Malloc2p0 : 5.52 89.86 5.62 0.00 0.00 341904.35 5421.61 495690.47 00:13:02.077 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x20 00:13:02.077 Malloc2p1 : 5.57 85.06 5.32 0.00 0.00 350795.20 7804.74 629145.60 00:13:02.077 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x20 length 0x20 00:13:02.077 Malloc2p1 : 5.52 89.84 5.61 0.00 0.00 340841.55 5421.61 484251.46 00:13:02.077 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x20 00:13:02.077 Malloc2p2 : 5.58 84.99 5.31 0.00 0.00 349373.87 7983.48 613893.59 00:13:02.077 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x20 length 0x20 00:13:02.077 Malloc2p2 : 5.52 89.82 5.61 0.00 0.00 339923.80 5213.09 476625.45 00:13:02.077 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x20 00:13:02.077 Malloc2p3 : 5.58 84.97 5.31 0.00 0.00 347930.74 7357.91 598641.57 00:13:02.077 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x20 length 0x20 00:13:02.077 Malloc2p3 : 5.52 89.80 5.61 0.00 0.00 338938.71 5510.98 467092.95 00:13:02.077 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x20 00:13:02.077 Malloc2p4 : 5.58 84.95 5.31 0.00 0.00 346444.61 7804.74 579576.55 00:13:02.077 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x20 length 0x20 00:13:02.077 Malloc2p4 : 5.52 89.78 5.61 0.00 0.00 337969.43 5540.77 457560.44 00:13:02.077 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x20 00:13:02.077 Malloc2p5 : 5.58 84.93 5.31 0.00 0.00 345015.49 6583.39 568137.54 00:13:02.077 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x20 length 0x20 00:13:02.077 Malloc2p5 : 5.53 89.76 5.61 0.00 0.00 336976.78 5481.19 448027.93 00:13:02.077 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x20 00:13:02.077 Malloc2p6 : 5.63 88.03 5.50 0.00 0.00 333390.20 7179.17 552885.53 00:13:02.077 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x20 length 0x20 00:13:02.077 Malloc2p6 : 5.53 89.74 5.61 0.00 0.00 335946.89 5630.14 438495.42 00:13:02.077 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x20 00:13:02.077 Malloc2p7 : 5.64 88.01 5.50 0.00 0.00 332064.22 7208.96 537633.51 00:13:02.077 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x20 length 0x20 00:13:02.077 Malloc2p7 : 5.53 89.72 5.61 0.00 0.00 334934.96 6225.92 427056.41 00:13:02.077 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x100 00:13:02.077 TestPT : 5.89 150.92 9.43 0.00 0.00 748472.99 36461.85 1624339.55 00:13:02.077 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x100 length 0x100 00:13:02.077 TestPT : 5.91 146.29 9.14 0.00 0.00 784378.93 40989.79 1677721.60 00:13:02.077 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x200 00:13:02.077 raid0 : 5.72 161.23 10.08 0.00 0.00 706382.57 35270.28 1624339.55 00:13:02.077 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x200 length 0x200 00:13:02.077 raid0 : 5.62 158.15 9.88 0.00 0.00 745335.92 30980.65 1639591.56 00:13:02.077 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x200 00:13:02.077 concat0 : 5.76 176.64 11.04 0.00 0.00 636058.54 31933.91 1631965.56 00:13:02.077 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x200 length 0x200 00:13:02.077 concat0 : 5.63 163.86 10.24 0.00 
0.00 711832.10 33125.47 1647217.57 00:13:02.077 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x100 00:13:02.077 raid1 : 5.81 207.12 12.95 0.00 0.00 536180.16 15192.44 1639591.56 00:13:02.077 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x100 length 0x100 00:13:02.077 raid1 : 5.66 168.88 10.55 0.00 0.00 678503.51 17039.36 1654843.58 00:13:02.077 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x0 length 0x4e 00:13:02.077 AIO0 : 5.84 201.85 12.62 0.00 0.00 330971.52 1266.04 960876.92 00:13:02.077 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:02.077 Verification LBA range: start 0x4e length 0x4e 00:13:02.077 AIO0 : 5.62 171.56 10.72 0.00 0.00 405359.76 6583.39 941811.90 00:13:02.077 =================================================================================================================== 00:13:02.077 Total : 4890.14 305.63 0.00 0.00 467121.37 1266.04 1677721.60 00:13:03.456 ************************************ 00:13:03.456 END TEST bdev_verify_big_io 00:13:03.456 ************************************ 00:13:03.456 00:13:03.456 real 0m9.457s 00:13:03.456 user 0m17.282s 00:13:03.456 sys 0m0.661s 00:13:03.456 12:58:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.456 12:58:22 -- common/autotest_common.sh@10 -- # set +x 00:13:03.456 12:58:22 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:03.456 12:58:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:03.456 12:58:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:03.456 12:58:22 -- common/autotest_common.sh@10 -- # set +x 00:13:03.456 ************************************ 00:13:03.456 START TEST bdev_write_zeroes 00:13:03.456 ************************************ 00:13:03.456 12:58:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:03.456 [2024-06-11 12:58:22.252470] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:13:03.456 [2024-06-11 12:58:22.252845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113200 ] 00:13:03.718 [2024-06-11 12:58:22.419668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.977 [2024-06-11 12:58:22.597327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.236 [2024-06-11 12:58:22.938329] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:04.236 [2024-06-11 12:58:22.938602] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:04.236 [2024-06-11 12:58:22.946320] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:04.236 [2024-06-11 12:58:22.946553] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:04.236 [2024-06-11 12:58:22.954341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:04.236 [2024-06-11 12:58:22.954494] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:04.236 [2024-06-11 12:58:22.954617] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:04.495 [2024-06-11 12:58:23.126976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:04.495 [2024-06-11 12:58:23.127291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.495 [2024-06-11 12:58:23.127378] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:04.495 [2024-06-11 12:58:23.127592] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.495 [2024-06-11 12:58:23.129866] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.495 [2024-06-11 12:58:23.130050] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:04.754 Running I/O for 1 seconds... 
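Each of these sub-tests is launched through the run_test helper from common/autotest_common.sh, which is what produces the START TEST / END TEST banners and the real/user/sys timings seen throughout this log. A simplified, illustrative model of that wrapper (the real helper also manages xtrace and failure reporting):

    # Simplified model only, not the actual implementation.
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    run_test bdev_write_zeroes "$bdevperf" --json ./bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''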
00:13:06.131 00:13:06.131 Latency(us) 00:13:06.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.131 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.131 Malloc0 : 1.04 5896.81 23.03 0.00 0.00 21691.62 703.77 37891.72 00:13:06.131 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.131 Malloc1p0 : 1.04 5890.55 23.01 0.00 0.00 21681.55 886.23 37176.79 00:13:06.131 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.131 Malloc1p1 : 1.04 5884.63 22.99 0.00 0.00 21661.38 901.12 36223.53 00:13:06.132 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 Malloc2p0 : 1.05 5878.68 22.96 0.00 0.00 21644.29 871.33 35508.60 00:13:06.132 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 Malloc2p1 : 1.05 5872.36 22.94 0.00 0.00 21626.22 882.50 35031.97 00:13:06.132 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 Malloc2p2 : 1.05 5866.51 22.92 0.00 0.00 21606.89 871.33 34317.03 00:13:06.132 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 Malloc2p3 : 1.05 5860.66 22.89 0.00 0.00 21585.78 837.82 33840.41 00:13:06.132 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 Malloc2p4 : 1.05 5854.84 22.87 0.00 0.00 21573.78 845.27 33125.47 00:13:06.132 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 Malloc2p5 : 1.05 5849.01 22.85 0.00 0.00 21549.15 848.99 32410.53 00:13:06.132 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 Malloc2p6 : 1.05 5843.25 22.83 0.00 0.00 21531.79 834.09 31933.91 00:13:06.132 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 Malloc2p7 : 1.05 5837.46 22.80 0.00 0.00 21515.69 800.58 30980.65 00:13:06.132 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 TestPT : 1.05 5831.66 22.78 0.00 0.00 21497.11 882.50 30146.56 00:13:06.132 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 raid0 : 1.05 5824.98 22.75 0.00 0.00 21468.74 1496.90 28835.84 00:13:06.132 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 concat0 : 1.06 5818.46 22.73 0.00 0.00 21425.96 1347.96 27405.96 00:13:06.132 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 raid1 : 1.06 5810.10 22.70 0.00 0.00 21375.99 2234.18 26214.40 00:13:06.132 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:06.132 AIO0 : 1.06 5790.88 22.62 0.00 0.00 21346.54 1727.77 26214.40 00:13:06.132 =================================================================================================================== 00:13:06.132 Total : 93610.84 365.67 0.00 0.00 21548.94 703.77 37891.72 00:13:07.510 ************************************ 00:13:07.510 END TEST bdev_write_zeroes 00:13:07.510 ************************************ 00:13:07.510 00:13:07.510 real 0m4.122s 00:13:07.510 user 0m3.471s 00:13:07.510 sys 0m0.466s 00:13:07.510 12:58:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.510 12:58:26 -- common/autotest_common.sh@10 -- # set +x 00:13:07.769 12:58:26 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:07.769 12:58:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:07.769 12:58:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:07.769 12:58:26 -- common/autotest_common.sh@10 -- # set +x 00:13:07.769 ************************************ 00:13:07.769 START TEST bdev_json_nonenclosed 00:13:07.769 ************************************ 00:13:07.769 12:58:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:07.769 [2024-06-11 12:58:26.432709] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:07.769 [2024-06-11 12:58:26.433103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113289 ] 00:13:07.769 [2024-06-11 12:58:26.598561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.027 [2024-06-11 12:58:26.768520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.028 [2024-06-11 12:58:26.769006] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:08.028 [2024-06-11 12:58:26.769164] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:08.286 ************************************ 00:13:08.286 END TEST bdev_json_nonenclosed 00:13:08.286 ************************************ 00:13:08.286 00:13:08.286 real 0m0.741s 00:13:08.286 user 0m0.515s 00:13:08.286 sys 0m0.124s 00:13:08.286 12:58:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.286 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:13:08.545 12:58:27 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:08.545 12:58:27 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:08.545 12:58:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:08.545 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:13:08.545 ************************************ 00:13:08.545 START TEST bdev_json_nonarray 00:13:08.545 ************************************ 00:13:08.545 12:58:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:08.545 [2024-06-11 12:58:27.230977] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
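bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is fed deliberately malformed configs and is expected to be rejected by the JSON config loader, the first with "not enclosed in {}" (visible above) and the second with "'subsystems' should be an array" (below). Illustrative shapes of such configs, not the actual files under test/bdev:

    # Top-level value is an array, so the config is "not enclosed in {}".
    cat > nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF

    # "subsystems" is an object rather than an array.
    cat > nonarray.json <<'EOF'
    { "subsystems": { "bdev": { "config": [] } } }
    EOF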
00:13:08.545 [2024-06-11 12:58:27.231373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113320 ] 00:13:08.804 [2024-06-11 12:58:27.398361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.804 [2024-06-11 12:58:27.564288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.804 [2024-06-11 12:58:27.564733] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:08.804 [2024-06-11 12:58:27.564881] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:09.063 ************************************ 00:13:09.063 END TEST bdev_json_nonarray 00:13:09.063 ************************************ 00:13:09.063 00:13:09.063 real 0m0.729s 00:13:09.063 user 0m0.503s 00:13:09.063 sys 0m0.124s 00:13:09.063 12:58:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.063 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:13:09.321 12:58:27 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:13:09.321 12:58:27 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:13:09.321 12:58:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:09.321 12:58:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.321 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:13:09.321 ************************************ 00:13:09.321 START TEST bdev_qos 00:13:09.321 ************************************ 00:13:09.321 Process qos testing pid: 113358 00:13:09.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.321 12:58:27 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:13:09.321 12:58:27 -- bdev/blockdev.sh@444 -- # QOS_PID=113358 00:13:09.321 12:58:27 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 113358' 00:13:09.321 12:58:27 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:09.321 12:58:27 -- bdev/blockdev.sh@447 -- # waitforlisten 113358 00:13:09.321 12:58:27 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:09.321 12:58:27 -- common/autotest_common.sh@819 -- # '[' -z 113358 ']' 00:13:09.321 12:58:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.321 12:58:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:09.321 12:58:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.321 12:58:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:09.322 12:58:27 -- common/autotest_common.sh@10 -- # set +x 00:13:09.322 [2024-06-11 12:58:28.013125] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
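This startup sequence is the driver pattern every remaining suite in this log reuses: bdevperf is launched in the background with -z, so after initialization it sits idle and only serves RPCs on /var/tmp/spdk.sock; the test waits for that socket, builds its bdevs over RPC, and only then kicks off the configured workload with bdevperf.py perform_tests. Condensed into a sketch (backgrounding and PID capture are implied by the trap/killprocess handling traced above rather than shown verbatim):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' &
  QOS_PID=$!
  waitforlisten "$QOS_PID"                          # polls until /var/tmp/spdk.sock accepts connections
  rpc_cmd bdev_malloc_create -b Malloc_0 128 512    # 128 MiB backing bdev, 512-byte blocks
  rpc_cmd bdev_null_create Null_1 128 512
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests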
00:13:09.322 [2024-06-11 12:58:28.013636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113358 ] 00:13:09.580 [2024-06-11 12:58:28.183313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.839 [2024-06-11 12:58:28.428808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.406 12:58:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:10.406 12:58:28 -- common/autotest_common.sh@852 -- # return 0 00:13:10.406 12:58:28 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:10.406 12:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.406 12:58:28 -- common/autotest_common.sh@10 -- # set +x 00:13:10.406 Malloc_0 00:13:10.406 12:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.406 12:58:29 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:13:10.406 12:58:29 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:13:10.406 12:58:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:10.406 12:58:29 -- common/autotest_common.sh@889 -- # local i 00:13:10.406 12:58:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:10.406 12:58:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:10.406 12:58:29 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:10.406 12:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.406 12:58:29 -- common/autotest_common.sh@10 -- # set +x 00:13:10.407 12:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.407 12:58:29 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:10.407 12:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.407 12:58:29 -- common/autotest_common.sh@10 -- # set +x 00:13:10.407 [ 00:13:10.407 { 00:13:10.407 "name": "Malloc_0", 00:13:10.407 "aliases": [ 00:13:10.407 "ff0c8d66-dd1a-4fc0-ad55-3d238f950035" 00:13:10.407 ], 00:13:10.407 "product_name": "Malloc disk", 00:13:10.407 "block_size": 512, 00:13:10.407 "num_blocks": 262144, 00:13:10.407 "uuid": "ff0c8d66-dd1a-4fc0-ad55-3d238f950035", 00:13:10.407 "assigned_rate_limits": { 00:13:10.407 "rw_ios_per_sec": 0, 00:13:10.407 "rw_mbytes_per_sec": 0, 00:13:10.407 "r_mbytes_per_sec": 0, 00:13:10.407 "w_mbytes_per_sec": 0 00:13:10.407 }, 00:13:10.407 "claimed": false, 00:13:10.407 "zoned": false, 00:13:10.407 "supported_io_types": { 00:13:10.407 "read": true, 00:13:10.407 "write": true, 00:13:10.407 "unmap": true, 00:13:10.407 "write_zeroes": true, 00:13:10.407 "flush": true, 00:13:10.407 "reset": true, 00:13:10.407 "compare": false, 00:13:10.407 "compare_and_write": false, 00:13:10.407 "abort": true, 00:13:10.407 "nvme_admin": false, 00:13:10.407 "nvme_io": false 00:13:10.407 }, 00:13:10.407 "memory_domains": [ 00:13:10.407 { 00:13:10.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.407 "dma_device_type": 2 00:13:10.407 } 00:13:10.407 ], 00:13:10.407 "driver_specific": {} 00:13:10.407 } 00:13:10.407 ] 00:13:10.407 12:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.407 12:58:29 -- common/autotest_common.sh@895 -- # return 0 00:13:10.407 12:58:29 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:10.407 12:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.407 12:58:29 -- common/autotest_common.sh@10 -- # 
set +x 00:13:10.407 Null_1 00:13:10.407 12:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.407 12:58:29 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:13:10.407 12:58:29 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:13:10.407 12:58:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:10.407 12:58:29 -- common/autotest_common.sh@889 -- # local i 00:13:10.407 12:58:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:10.407 12:58:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:10.407 12:58:29 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:10.407 12:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.407 12:58:29 -- common/autotest_common.sh@10 -- # set +x 00:13:10.407 12:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.407 12:58:29 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:10.407 12:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.407 12:58:29 -- common/autotest_common.sh@10 -- # set +x 00:13:10.407 [ 00:13:10.407 { 00:13:10.407 "name": "Null_1", 00:13:10.407 "aliases": [ 00:13:10.407 "1394933b-6478-4ae8-a9b2-596684aace29" 00:13:10.407 ], 00:13:10.407 "product_name": "Null disk", 00:13:10.407 "block_size": 512, 00:13:10.407 "num_blocks": 262144, 00:13:10.407 "uuid": "1394933b-6478-4ae8-a9b2-596684aace29", 00:13:10.407 "assigned_rate_limits": { 00:13:10.407 "rw_ios_per_sec": 0, 00:13:10.407 "rw_mbytes_per_sec": 0, 00:13:10.407 "r_mbytes_per_sec": 0, 00:13:10.407 "w_mbytes_per_sec": 0 00:13:10.407 }, 00:13:10.407 "claimed": false, 00:13:10.407 "zoned": false, 00:13:10.407 "supported_io_types": { 00:13:10.407 "read": true, 00:13:10.407 "write": true, 00:13:10.407 "unmap": false, 00:13:10.407 "write_zeroes": true, 00:13:10.407 "flush": false, 00:13:10.407 "reset": true, 00:13:10.407 "compare": false, 00:13:10.407 "compare_and_write": false, 00:13:10.407 "abort": true, 00:13:10.407 "nvme_admin": false, 00:13:10.407 "nvme_io": false 00:13:10.407 }, 00:13:10.407 "driver_specific": {} 00:13:10.407 } 00:13:10.407 ] 00:13:10.407 12:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.407 12:58:29 -- common/autotest_common.sh@895 -- # return 0 00:13:10.407 12:58:29 -- bdev/blockdev.sh@455 -- # qos_function_test 00:13:10.407 12:58:29 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:10.407 12:58:29 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:13:10.407 12:58:29 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:13:10.407 12:58:29 -- bdev/blockdev.sh@410 -- # local io_result=0 00:13:10.407 12:58:29 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:13:10.407 12:58:29 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:13:10.407 12:58:29 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:13:10.407 12:58:29 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:10.407 12:58:29 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:10.407 12:58:29 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:10.407 12:58:29 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:10.407 12:58:29 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:10.407 12:58:29 -- bdev/blockdev.sh@376 -- # tail -1 00:13:10.666 Running I/O for 60 seconds... 
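Reconstructed from the xtrace above and the lines that follow, get_io_result samples live statistics with scripts/iostat.py rather than the RPC counters: it runs the tool for five one-second intervals, keeps the last line for the device under test, and picks the IOPS column for an IOPS limit or the throughput column for a bandwidth limit. A simplified sketch of that helper (column meanings inferred from the values in this log, not from iostat.py itself):

  get_io_result() {
      local limit_type=$1 qos_dev=$2 iostat_result
      iostat_result=$(/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 | grep "$qos_dev" | tail -1)
      if [ "$limit_type" = IOPS ]; then
          awk '{print $2}' <<< "$iostat_result"     # IOPS, e.g. 81496.56 for the unthrottled Malloc_0 run
      else
          awk '{print $6}' <<< "$iostat_result"     # throughput in KiB/s, e.g. 128000.00 for Null_1
      fi
  }

The unthrottled baseline of roughly 81k IOPS is what the suite then uses to pick a deliberately lower cap (20000 IOPS) for the actual QoS checks.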
00:13:15.950 12:58:34 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 81496.56 325986.25 0.00 0.00 329728.00 0.00 0.00 ' 00:13:15.950 12:58:34 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:15.950 12:58:34 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:15.950 12:58:34 -- bdev/blockdev.sh@378 -- # iostat_result=81496.56 00:13:15.950 12:58:34 -- bdev/blockdev.sh@383 -- # echo 81496 00:13:15.950 12:58:34 -- bdev/blockdev.sh@414 -- # io_result=81496 00:13:15.950 12:58:34 -- bdev/blockdev.sh@416 -- # iops_limit=20000 00:13:15.950 12:58:34 -- bdev/blockdev.sh@417 -- # '[' 20000 -gt 1000 ']' 00:13:15.950 12:58:34 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc_0 00:13:15.950 12:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.950 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:13:15.950 12:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.950 12:58:34 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 20000 IOPS Malloc_0 00:13:15.950 12:58:34 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:15.950 12:58:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.950 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:13:15.950 ************************************ 00:13:15.950 START TEST bdev_qos_iops 00:13:15.950 ************************************ 00:13:15.950 12:58:34 -- common/autotest_common.sh@1104 -- # run_qos_test 20000 IOPS Malloc_0 00:13:15.950 12:58:34 -- bdev/blockdev.sh@387 -- # local qos_limit=20000 00:13:15.950 12:58:34 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:15.950 12:58:34 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:13:15.950 12:58:34 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:15.950 12:58:34 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:15.950 12:58:34 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:15.950 12:58:34 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:15.950 12:58:34 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:15.950 12:58:34 -- bdev/blockdev.sh@376 -- # tail -1 00:13:21.216 12:58:39 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 20000.94 80003.77 0.00 0.00 81040.00 0.00 0.00 ' 00:13:21.217 12:58:39 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:21.217 12:58:39 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:21.217 12:58:39 -- bdev/blockdev.sh@378 -- # iostat_result=20000.94 00:13:21.217 12:58:39 -- bdev/blockdev.sh@383 -- # echo 20000 00:13:21.217 ************************************ 00:13:21.217 END TEST bdev_qos_iops 00:13:21.217 ************************************ 00:13:21.217 12:58:39 -- bdev/blockdev.sh@390 -- # qos_result=20000 00:13:21.217 12:58:39 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:13:21.217 12:58:39 -- bdev/blockdev.sh@394 -- # lower_limit=18000 00:13:21.217 12:58:39 -- bdev/blockdev.sh@395 -- # upper_limit=22000 00:13:21.217 12:58:39 -- bdev/blockdev.sh@398 -- # '[' 20000 -lt 18000 ']' 00:13:21.217 12:58:39 -- bdev/blockdev.sh@398 -- # '[' 20000 -gt 22000 ']' 00:13:21.217 00:13:21.217 real 0m5.184s 00:13:21.217 user 0m0.104s 00:13:21.217 sys 0m0.026s 00:13:21.217 12:58:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.217 12:58:39 -- common/autotest_common.sh@10 -- # set +x 00:13:21.217 12:58:39 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:13:21.217 12:58:39 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:21.217 12:58:39 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:21.217 12:58:39 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:21.217 12:58:39 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:21.217 12:58:39 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:21.217 12:58:39 -- bdev/blockdev.sh@376 -- # tail -1 00:13:26.481 12:58:44 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 31505.25 126021.01 0.00 0.00 128000.00 0.00 0.00 ' 00:13:26.481 12:58:44 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:26.481 12:58:44 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:26.481 12:58:44 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:26.481 12:58:44 -- bdev/blockdev.sh@380 -- # iostat_result=128000.00 00:13:26.481 12:58:44 -- bdev/blockdev.sh@383 -- # echo 128000 00:13:26.481 12:58:44 -- bdev/blockdev.sh@425 -- # bw_limit=128000 00:13:26.481 12:58:44 -- bdev/blockdev.sh@426 -- # bw_limit=12 00:13:26.481 12:58:44 -- bdev/blockdev.sh@427 -- # '[' 12 -lt 2 ']' 00:13:26.481 12:58:44 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:13:26.481 12:58:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.481 12:58:44 -- common/autotest_common.sh@10 -- # set +x 00:13:26.481 12:58:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.481 12:58:44 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:13:26.481 12:58:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:26.481 12:58:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:26.481 12:58:44 -- common/autotest_common.sh@10 -- # set +x 00:13:26.481 ************************************ 00:13:26.481 START TEST bdev_qos_bw 00:13:26.481 ************************************ 00:13:26.481 12:58:44 -- common/autotest_common.sh@1104 -- # run_qos_test 12 BANDWIDTH Null_1 00:13:26.481 12:58:44 -- bdev/blockdev.sh@387 -- # local qos_limit=12 00:13:26.481 12:58:44 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:26.481 12:58:44 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:13:26.481 12:58:44 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:26.481 12:58:44 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:26.481 12:58:44 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:26.481 12:58:44 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:26.481 12:58:44 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:26.481 12:58:44 -- bdev/blockdev.sh@376 -- # tail -1 00:13:31.776 12:58:50 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 3070.31 12281.23 0.00 0.00 12580.00 0.00 0.00 ' 00:13:31.776 12:58:50 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:31.776 12:58:50 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:31.776 12:58:50 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:31.776 12:58:50 -- bdev/blockdev.sh@380 -- # iostat_result=12580.00 00:13:31.776 12:58:50 -- bdev/blockdev.sh@383 -- # echo 12580 00:13:31.776 ************************************ 00:13:31.776 END TEST bdev_qos_bw 00:13:31.776 ************************************ 00:13:31.776 12:58:50 -- bdev/blockdev.sh@390 -- # qos_result=12580 00:13:31.776 12:58:50 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:31.776 12:58:50 -- bdev/blockdev.sh@392 -- # qos_limit=12288 00:13:31.776 12:58:50 -- bdev/blockdev.sh@394 -- # lower_limit=11059 00:13:31.776 12:58:50 -- bdev/blockdev.sh@395 -- # 
upper_limit=13516 00:13:31.776 12:58:50 -- bdev/blockdev.sh@398 -- # '[' 12580 -lt 11059 ']' 00:13:31.776 12:58:50 -- bdev/blockdev.sh@398 -- # '[' 12580 -gt 13516 ']' 00:13:31.776 00:13:31.776 real 0m5.241s 00:13:31.776 user 0m0.121s 00:13:31.776 sys 0m0.015s 00:13:31.776 12:58:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.776 12:58:50 -- common/autotest_common.sh@10 -- # set +x 00:13:31.776 12:58:50 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:31.777 12:58:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.777 12:58:50 -- common/autotest_common.sh@10 -- # set +x 00:13:31.777 12:58:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.777 12:58:50 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:31.777 12:58:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:31.777 12:58:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:31.777 12:58:50 -- common/autotest_common.sh@10 -- # set +x 00:13:31.777 ************************************ 00:13:31.777 START TEST bdev_qos_ro_bw 00:13:31.777 ************************************ 00:13:31.777 12:58:50 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:31.777 12:58:50 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:13:31.777 12:58:50 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:31.777 12:58:50 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:13:31.777 12:58:50 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:31.777 12:58:50 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:31.777 12:58:50 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:31.777 12:58:50 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:31.777 12:58:50 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:31.777 12:58:50 -- bdev/blockdev.sh@376 -- # tail -1 00:13:37.043 12:58:55 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.98 2047.92 0.00 0.00 2068.00 0.00 0.00 ' 00:13:37.043 12:58:55 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:37.043 12:58:55 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:37.043 12:58:55 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:37.043 12:58:55 -- bdev/blockdev.sh@380 -- # iostat_result=2068.00 00:13:37.043 12:58:55 -- bdev/blockdev.sh@383 -- # echo 2068 00:13:37.043 ************************************ 00:13:37.043 END TEST bdev_qos_ro_bw 00:13:37.043 ************************************ 00:13:37.043 12:58:55 -- bdev/blockdev.sh@390 -- # qos_result=2068 00:13:37.043 12:58:55 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:37.043 12:58:55 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:13:37.043 12:58:55 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:13:37.043 12:58:55 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:13:37.043 12:58:55 -- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']' 00:13:37.043 12:58:55 -- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']' 00:13:37.043 00:13:37.043 real 0m5.160s 00:13:37.043 user 0m0.113s 00:13:37.043 sys 0m0.017s 00:13:37.043 12:58:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.043 12:58:55 -- common/autotest_common.sh@10 -- # set +x 00:13:37.043 12:58:55 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:37.043 12:58:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.043 12:58:55 -- common/autotest_common.sh@10 -- # set +x 
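The lower/upper bounds printed by run_qos_test in the three passes above are consistent with a +/-10 % tolerance computed in shell integer arithmetic around the configured limit, with MB/s limits first scaled to KiB/s. Assuming that is indeed how the bounds are derived, the bandwidth case works out as:

  qos_limit=$((12 * 1024))             # 12 MB/s limit expressed as 12288 KiB/s
  lower_limit=$((qos_limit * 9 / 10))  # 11059
  upper_limit=$((qos_limit * 11 / 10)) # 13516
  # measured 12580 KiB/s lies inside [11059, 13516] -> pass
  # the same rule reproduces the other passes: 20000 IOPS -> [18000, 22000],
  # 2 MB/s read-only -> 2048 KiB/s -> [1843, 2252]

In other words each QoS pass asserts that the throttled device settles within 10 % of the requested cap, not at an exact value.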
00:13:37.301 12:58:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.301 12:58:55 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:13:37.301 12:58:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.301 12:58:55 -- common/autotest_common.sh@10 -- # set +x 00:13:37.301 00:13:37.301 Latency(us) 00:13:37.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.301 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:37.301 Malloc_0 : 26.56 27614.07 107.87 0.00 0.00 9185.58 1794.79 503316.48 00:13:37.301 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:37.301 Null_1 : 26.75 28769.86 112.38 0.00 0.00 8879.79 618.12 185883.93 00:13:37.301 =================================================================================================================== 00:13:37.301 Total : 56383.93 220.25 0.00 0.00 9029.01 618.12 503316.48 00:13:37.301 0 00:13:37.301 12:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.301 12:58:56 -- bdev/blockdev.sh@459 -- # killprocess 113358 00:13:37.301 12:58:56 -- common/autotest_common.sh@926 -- # '[' -z 113358 ']' 00:13:37.301 12:58:56 -- common/autotest_common.sh@930 -- # kill -0 113358 00:13:37.301 12:58:56 -- common/autotest_common.sh@931 -- # uname 00:13:37.301 12:58:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:37.301 12:58:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113358 00:13:37.301 killing process with pid 113358 00:13:37.301 Received shutdown signal, test time was about 26.779189 seconds 00:13:37.301 00:13:37.301 Latency(us) 00:13:37.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.301 =================================================================================================================== 00:13:37.301 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:37.301 12:58:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:37.301 12:58:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:37.301 12:58:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113358' 00:13:37.301 12:58:56 -- common/autotest_common.sh@945 -- # kill 113358 00:13:37.301 12:58:56 -- common/autotest_common.sh@950 -- # wait 113358 00:13:38.674 ************************************ 00:13:38.674 END TEST bdev_qos 00:13:38.674 ************************************ 00:13:38.674 12:58:57 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:13:38.674 00:13:38.674 real 0m29.280s 00:13:38.674 user 0m30.010s 00:13:38.674 sys 0m0.572s 00:13:38.674 12:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.674 12:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:38.674 12:58:57 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:38.674 12:58:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:38.674 12:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.674 12:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:38.674 ************************************ 00:13:38.674 START TEST bdev_qd_sampling 00:13:38.674 ************************************ 00:13:38.674 12:58:57 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:13:38.674 12:58:57 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:13:38.674 12:58:57 -- bdev/blockdev.sh@539 -- # QD_PID=113886 00:13:38.674 12:58:57 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD 
sampling period testing pid: 113886' 00:13:38.674 12:58:57 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:38.674 Process bdev QD sampling period testing pid: 113886 00:13:38.674 12:58:57 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:38.674 12:58:57 -- bdev/blockdev.sh@542 -- # waitforlisten 113886 00:13:38.674 12:58:57 -- common/autotest_common.sh@819 -- # '[' -z 113886 ']' 00:13:38.674 12:58:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.675 12:58:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:38.675 12:58:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.675 12:58:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:38.675 12:58:57 -- common/autotest_common.sh@10 -- # set +x 00:13:38.675 [2024-06-11 12:58:57.355367] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:38.675 [2024-06-11 12:58:57.355739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113886 ] 00:13:38.933 [2024-06-11 12:58:57.532674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:38.933 [2024-06-11 12:58:57.765036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.933 [2024-06-11 12:58:57.765038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.499 12:58:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:39.499 12:58:58 -- common/autotest_common.sh@852 -- # return 0 00:13:39.499 12:58:58 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:39.499 12:58:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.499 12:58:58 -- common/autotest_common.sh@10 -- # set +x 00:13:39.757 Malloc_QD 00:13:39.757 12:58:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.757 12:58:58 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:13:39.757 12:58:58 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:13:39.757 12:58:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:39.757 12:58:58 -- common/autotest_common.sh@889 -- # local i 00:13:39.757 12:58:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:39.757 12:58:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:39.757 12:58:58 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:39.757 12:58:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.757 12:58:58 -- common/autotest_common.sh@10 -- # set +x 00:13:39.757 12:58:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.757 12:58:58 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:39.757 12:58:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.757 12:58:58 -- common/autotest_common.sh@10 -- # set +x 00:13:39.757 [ 00:13:39.757 { 00:13:39.757 "name": "Malloc_QD", 00:13:39.757 "aliases": [ 00:13:39.757 "15fbcaeb-b153-448d-8649-b490b9bca799" 00:13:39.757 ], 00:13:39.757 "product_name": "Malloc disk", 00:13:39.757 "block_size": 512, 00:13:39.757 
"num_blocks": 262144, 00:13:39.757 "uuid": "15fbcaeb-b153-448d-8649-b490b9bca799", 00:13:39.757 "assigned_rate_limits": { 00:13:39.757 "rw_ios_per_sec": 0, 00:13:39.757 "rw_mbytes_per_sec": 0, 00:13:39.757 "r_mbytes_per_sec": 0, 00:13:39.757 "w_mbytes_per_sec": 0 00:13:39.757 }, 00:13:39.757 "claimed": false, 00:13:39.757 "zoned": false, 00:13:39.757 "supported_io_types": { 00:13:39.757 "read": true, 00:13:39.757 "write": true, 00:13:39.757 "unmap": true, 00:13:39.757 "write_zeroes": true, 00:13:39.757 "flush": true, 00:13:39.757 "reset": true, 00:13:39.757 "compare": false, 00:13:39.757 "compare_and_write": false, 00:13:39.757 "abort": true, 00:13:39.757 "nvme_admin": false, 00:13:39.757 "nvme_io": false 00:13:39.757 }, 00:13:39.757 "memory_domains": [ 00:13:39.757 { 00:13:39.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.757 "dma_device_type": 2 00:13:39.757 } 00:13:39.757 ], 00:13:39.757 "driver_specific": {} 00:13:39.757 } 00:13:39.757 ] 00:13:39.757 12:58:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.757 12:58:58 -- common/autotest_common.sh@895 -- # return 0 00:13:39.757 12:58:58 -- bdev/blockdev.sh@548 -- # sleep 2 00:13:39.757 12:58:58 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:39.757 Running I/O for 5 seconds... 00:13:41.681 12:59:00 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:13:41.681 12:59:00 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:13:41.682 12:59:00 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:13:41.682 12:59:00 -- bdev/blockdev.sh@519 -- # local iostats 00:13:41.682 12:59:00 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:41.682 12:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.682 12:59:00 -- common/autotest_common.sh@10 -- # set +x 00:13:41.682 12:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.682 12:59:00 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:41.682 12:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.682 12:59:00 -- common/autotest_common.sh@10 -- # set +x 00:13:41.682 12:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.682 12:59:00 -- bdev/blockdev.sh@523 -- # iostats='{ 00:13:41.682 "tick_rate": 2200000000, 00:13:41.682 "ticks": 1721210513668, 00:13:41.682 "bdevs": [ 00:13:41.682 { 00:13:41.682 "name": "Malloc_QD", 00:13:41.682 "bytes_read": 973115904, 00:13:41.682 "num_read_ops": 237571, 00:13:41.682 "bytes_written": 0, 00:13:41.682 "num_write_ops": 0, 00:13:41.682 "bytes_unmapped": 0, 00:13:41.682 "num_unmap_ops": 0, 00:13:41.682 "bytes_copied": 0, 00:13:41.682 "num_copy_ops": 0, 00:13:41.682 "read_latency_ticks": 2175721562184, 00:13:41.682 "max_read_latency_ticks": 12755048, 00:13:41.682 "min_read_latency_ticks": 307036, 00:13:41.682 "write_latency_ticks": 0, 00:13:41.682 "max_write_latency_ticks": 0, 00:13:41.682 "min_write_latency_ticks": 0, 00:13:41.682 "unmap_latency_ticks": 0, 00:13:41.682 "max_unmap_latency_ticks": 0, 00:13:41.682 "min_unmap_latency_ticks": 0, 00:13:41.682 "copy_latency_ticks": 0, 00:13:41.682 "max_copy_latency_ticks": 0, 00:13:41.682 "min_copy_latency_ticks": 0, 00:13:41.682 "io_error": {}, 00:13:41.682 "queue_depth_polling_period": 10, 00:13:41.682 "queue_depth": 512, 00:13:41.682 "io_time": 30, 00:13:41.682 "weighted_io_time": 15360 00:13:41.682 } 00:13:41.682 ] 00:13:41.682 }' 00:13:41.682 12:59:00 -- bdev/blockdev.sh@525 -- # jq -r 
'.bdevs[0].queue_depth_polling_period' 00:13:41.682 12:59:00 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:13:41.682 12:59:00 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:13:41.682 12:59:00 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:13:41.682 12:59:00 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:41.682 12:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.682 12:59:00 -- common/autotest_common.sh@10 -- # set +x 00:13:41.682 00:13:41.682 Latency(us) 00:13:41.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.682 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:41.682 Malloc_QD : 2.00 60080.24 234.69 0.00 0.00 4250.12 1020.28 5808.87 00:13:41.682 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:41.682 Malloc_QD : 2.01 62690.80 244.89 0.00 0.00 4074.48 752.17 4438.57 00:13:41.682 =================================================================================================================== 00:13:41.682 Total : 122771.05 479.57 0.00 0.00 4160.38 752.17 5808.87 00:13:41.940 0 00:13:41.940 12:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.940 12:59:00 -- bdev/blockdev.sh@552 -- # killprocess 113886 00:13:41.940 12:59:00 -- common/autotest_common.sh@926 -- # '[' -z 113886 ']' 00:13:41.940 12:59:00 -- common/autotest_common.sh@930 -- # kill -0 113886 00:13:41.940 12:59:00 -- common/autotest_common.sh@931 -- # uname 00:13:41.940 12:59:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:41.940 12:59:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113886 00:13:41.940 12:59:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:41.940 12:59:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:41.940 12:59:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113886' 00:13:41.940 killing process with pid 113886 00:13:41.940 12:59:00 -- common/autotest_common.sh@945 -- # kill 113886 00:13:41.940 Received shutdown signal, test time was about 2.144140 seconds 00:13:41.940 00:13:41.941 Latency(us) 00:13:41.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.941 =================================================================================================================== 00:13:41.941 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:41.941 12:59:00 -- common/autotest_common.sh@950 -- # wait 113886 00:13:43.315 ************************************ 00:13:43.315 END TEST bdev_qd_sampling 00:13:43.315 ************************************ 00:13:43.315 12:59:01 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:13:43.315 00:13:43.315 real 0m4.563s 00:13:43.315 user 0m8.347s 00:13:43.315 sys 0m0.400s 00:13:43.315 12:59:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.315 12:59:01 -- common/autotest_common.sh@10 -- # set +x 00:13:43.315 12:59:01 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:13:43.315 12:59:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:43.315 12:59:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:43.315 12:59:01 -- common/autotest_common.sh@10 -- # set +x 00:13:43.315 ************************************ 00:13:43.315 START TEST bdev_error 00:13:43.315 ************************************ 00:13:43.315 12:59:01 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:13:43.315 12:59:01 -- 
bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:13:43.315 12:59:01 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:13:43.315 12:59:01 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:13:43.315 12:59:01 -- bdev/blockdev.sh@470 -- # ERR_PID=113973 00:13:43.315 12:59:01 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 113973' 00:13:43.315 12:59:01 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:43.315 Process error testing pid: 113973 00:13:43.315 12:59:01 -- bdev/blockdev.sh@472 -- # waitforlisten 113973 00:13:43.315 12:59:01 -- common/autotest_common.sh@819 -- # '[' -z 113973 ']' 00:13:43.315 12:59:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.315 12:59:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:43.315 12:59:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.315 12:59:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:43.315 12:59:01 -- common/autotest_common.sh@10 -- # set +x 00:13:43.315 [2024-06-11 12:59:01.975260] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:43.315 [2024-06-11 12:59:01.975480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113973 ] 00:13:43.315 [2024-06-11 12:59:02.142639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.573 [2024-06-11 12:59:02.342838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.140 12:59:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:44.140 12:59:02 -- common/autotest_common.sh@852 -- # return 0 00:13:44.140 12:59:02 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:44.140 12:59:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.140 12:59:02 -- common/autotest_common.sh@10 -- # set +x 00:13:44.140 Dev_1 00:13:44.140 12:59:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.140 12:59:02 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:13:44.140 12:59:02 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:44.140 12:59:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:44.140 12:59:02 -- common/autotest_common.sh@889 -- # local i 00:13:44.140 12:59:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:44.140 12:59:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:44.140 12:59:02 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:44.140 12:59:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.140 12:59:02 -- common/autotest_common.sh@10 -- # set +x 00:13:44.140 12:59:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.140 12:59:02 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:44.140 12:59:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.140 12:59:02 -- common/autotest_common.sh@10 -- # set +x 00:13:44.140 [ 00:13:44.140 { 00:13:44.140 "name": "Dev_1", 00:13:44.140 "aliases": [ 00:13:44.140 "0753dd41-b5fc-488b-a7e8-903fdf85482b" 00:13:44.140 ], 00:13:44.140 "product_name": "Malloc disk", 00:13:44.140 "block_size": 
512, 00:13:44.140 "num_blocks": 262144, 00:13:44.140 "uuid": "0753dd41-b5fc-488b-a7e8-903fdf85482b", 00:13:44.140 "assigned_rate_limits": { 00:13:44.140 "rw_ios_per_sec": 0, 00:13:44.140 "rw_mbytes_per_sec": 0, 00:13:44.140 "r_mbytes_per_sec": 0, 00:13:44.140 "w_mbytes_per_sec": 0 00:13:44.140 }, 00:13:44.140 "claimed": false, 00:13:44.140 "zoned": false, 00:13:44.140 "supported_io_types": { 00:13:44.140 "read": true, 00:13:44.140 "write": true, 00:13:44.140 "unmap": true, 00:13:44.140 "write_zeroes": true, 00:13:44.140 "flush": true, 00:13:44.140 "reset": true, 00:13:44.140 "compare": false, 00:13:44.140 "compare_and_write": false, 00:13:44.140 "abort": true, 00:13:44.140 "nvme_admin": false, 00:13:44.140 "nvme_io": false 00:13:44.140 }, 00:13:44.140 "memory_domains": [ 00:13:44.140 { 00:13:44.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.140 "dma_device_type": 2 00:13:44.140 } 00:13:44.140 ], 00:13:44.140 "driver_specific": {} 00:13:44.140 } 00:13:44.140 ] 00:13:44.140 12:59:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.140 12:59:02 -- common/autotest_common.sh@895 -- # return 0 00:13:44.140 12:59:02 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:13:44.140 12:59:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.140 12:59:02 -- common/autotest_common.sh@10 -- # set +x 00:13:44.140 true 00:13:44.140 12:59:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.140 12:59:02 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:44.140 12:59:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.140 12:59:02 -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 Dev_2 00:13:44.399 12:59:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.399 12:59:03 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:13:44.399 12:59:03 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:44.399 12:59:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:44.399 12:59:03 -- common/autotest_common.sh@889 -- # local i 00:13:44.399 12:59:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:44.399 12:59:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:44.399 12:59:03 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:44.399 12:59:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.399 12:59:03 -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 12:59:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.399 12:59:03 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:44.399 12:59:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.399 12:59:03 -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 [ 00:13:44.399 { 00:13:44.399 "name": "Dev_2", 00:13:44.399 "aliases": [ 00:13:44.399 "a72e80da-a8e1-48bc-8eb6-8e582e0462c9" 00:13:44.399 ], 00:13:44.399 "product_name": "Malloc disk", 00:13:44.399 "block_size": 512, 00:13:44.399 "num_blocks": 262144, 00:13:44.399 "uuid": "a72e80da-a8e1-48bc-8eb6-8e582e0462c9", 00:13:44.399 "assigned_rate_limits": { 00:13:44.399 "rw_ios_per_sec": 0, 00:13:44.399 "rw_mbytes_per_sec": 0, 00:13:44.399 "r_mbytes_per_sec": 0, 00:13:44.399 "w_mbytes_per_sec": 0 00:13:44.399 }, 00:13:44.399 "claimed": false, 00:13:44.399 "zoned": false, 00:13:44.399 "supported_io_types": { 00:13:44.399 "read": true, 00:13:44.399 "write": true, 00:13:44.399 "unmap": true, 00:13:44.399 "write_zeroes": true, 00:13:44.399 "flush": true, 00:13:44.399 "reset": true, 
00:13:44.399 "compare": false, 00:13:44.399 "compare_and_write": false, 00:13:44.399 "abort": true, 00:13:44.399 "nvme_admin": false, 00:13:44.399 "nvme_io": false 00:13:44.399 }, 00:13:44.399 "memory_domains": [ 00:13:44.399 { 00:13:44.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.399 "dma_device_type": 2 00:13:44.399 } 00:13:44.399 ], 00:13:44.399 "driver_specific": {} 00:13:44.399 } 00:13:44.399 ] 00:13:44.399 12:59:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.399 12:59:03 -- common/autotest_common.sh@895 -- # return 0 00:13:44.399 12:59:03 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:44.399 12:59:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.399 12:59:03 -- common/autotest_common.sh@10 -- # set +x 00:13:44.399 12:59:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.399 12:59:03 -- bdev/blockdev.sh@482 -- # sleep 1 00:13:44.399 12:59:03 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:44.399 Running I/O for 5 seconds... 00:13:45.334 Process is existed as continue on error is set. Pid: 113973 00:13:45.334 12:59:04 -- bdev/blockdev.sh@485 -- # kill -0 113973 00:13:45.334 12:59:04 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 113973' 00:13:45.334 12:59:04 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:45.334 12:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.334 12:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.334 12:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.334 12:59:04 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:45.334 12:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.334 12:59:04 -- common/autotest_common.sh@10 -- # set +x 00:13:45.592 Timeout while waiting for response: 00:13:45.592 00:13:45.592 00:13:45.592 12:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.592 12:59:04 -- bdev/blockdev.sh@495 -- # sleep 5 00:13:49.775 00:13:49.775 Latency(us) 00:13:49.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.775 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:49.775 EE_Dev_1 : 0.91 46848.48 183.00 5.50 0.00 339.10 124.74 595.78 00:13:49.775 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:49.775 Dev_2 : 5.00 95996.97 374.99 0.00 0.00 164.32 54.46 278349.27 00:13:49.775 =================================================================================================================== 00:13:49.775 Total : 142845.44 557.99 5.50 0.00 178.56 54.46 278349.27 00:13:50.707 12:59:09 -- bdev/blockdev.sh@497 -- # killprocess 113973 00:13:50.707 12:59:09 -- common/autotest_common.sh@926 -- # '[' -z 113973 ']' 00:13:50.707 12:59:09 -- common/autotest_common.sh@930 -- # kill -0 113973 00:13:50.707 12:59:09 -- common/autotest_common.sh@931 -- # uname 00:13:50.707 12:59:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:50.707 12:59:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113973 00:13:50.707 12:59:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:50.707 12:59:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:50.707 killing process with pid 113973 00:13:50.707 Received shutdown signal, test time was about 5.000000 seconds 00:13:50.707 00:13:50.707 Latency(us) 00:13:50.707 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.707 =================================================================================================================== 00:13:50.707 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:50.707 12:59:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113973' 00:13:50.707 12:59:09 -- common/autotest_common.sh@945 -- # kill 113973 00:13:50.707 12:59:09 -- common/autotest_common.sh@950 -- # wait 113973 00:13:52.081 12:59:10 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:52.081 12:59:10 -- bdev/blockdev.sh@501 -- # ERR_PID=114113 00:13:52.081 12:59:10 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 114113' 00:13:52.081 Process error testing pid: 114113 00:13:52.081 12:59:10 -- bdev/blockdev.sh@503 -- # waitforlisten 114113 00:13:52.081 12:59:10 -- common/autotest_common.sh@819 -- # '[' -z 114113 ']' 00:13:52.081 12:59:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.081 12:59:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:52.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.081 12:59:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.081 12:59:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:52.081 12:59:10 -- common/autotest_common.sh@10 -- # set +x 00:13:52.081 [2024-06-11 12:59:10.801138] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:52.081 [2024-06-11 12:59:10.801318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114113 ] 00:13:52.339 [2024-06-11 12:59:10.947836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.339 [2024-06-11 12:59:11.131012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.905 12:59:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:52.905 12:59:11 -- common/autotest_common.sh@852 -- # return 0 00:13:52.905 12:59:11 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:52.905 12:59:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.905 12:59:11 -- common/autotest_common.sh@10 -- # set +x 00:13:53.165 Dev_1 00:13:53.165 12:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.165 12:59:11 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:13:53.165 12:59:11 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:53.165 12:59:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:53.165 12:59:11 -- common/autotest_common.sh@889 -- # local i 00:13:53.165 12:59:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:53.165 12:59:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:53.165 12:59:11 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:53.165 12:59:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.165 12:59:11 -- common/autotest_common.sh@10 -- # set +x 00:13:53.165 12:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.165 12:59:11 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:53.165 12:59:11 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.165 12:59:11 -- common/autotest_common.sh@10 -- # set +x 00:13:53.165 [ 00:13:53.165 { 00:13:53.165 "name": "Dev_1", 00:13:53.165 "aliases": [ 00:13:53.165 "1ce14c83-c0e4-4ccd-8eed-fd0955dfdc10" 00:13:53.165 ], 00:13:53.165 "product_name": "Malloc disk", 00:13:53.165 "block_size": 512, 00:13:53.165 "num_blocks": 262144, 00:13:53.165 "uuid": "1ce14c83-c0e4-4ccd-8eed-fd0955dfdc10", 00:13:53.165 "assigned_rate_limits": { 00:13:53.165 "rw_ios_per_sec": 0, 00:13:53.165 "rw_mbytes_per_sec": 0, 00:13:53.165 "r_mbytes_per_sec": 0, 00:13:53.165 "w_mbytes_per_sec": 0 00:13:53.165 }, 00:13:53.165 "claimed": false, 00:13:53.165 "zoned": false, 00:13:53.165 "supported_io_types": { 00:13:53.165 "read": true, 00:13:53.165 "write": true, 00:13:53.165 "unmap": true, 00:13:53.165 "write_zeroes": true, 00:13:53.165 "flush": true, 00:13:53.165 "reset": true, 00:13:53.165 "compare": false, 00:13:53.165 "compare_and_write": false, 00:13:53.165 "abort": true, 00:13:53.165 "nvme_admin": false, 00:13:53.165 "nvme_io": false 00:13:53.165 }, 00:13:53.165 "memory_domains": [ 00:13:53.165 { 00:13:53.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.165 "dma_device_type": 2 00:13:53.165 } 00:13:53.165 ], 00:13:53.165 "driver_specific": {} 00:13:53.165 } 00:13:53.165 ] 00:13:53.165 12:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.165 12:59:11 -- common/autotest_common.sh@895 -- # return 0 00:13:53.165 12:59:11 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:13:53.165 12:59:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.165 12:59:11 -- common/autotest_common.sh@10 -- # set +x 00:13:53.165 true 00:13:53.165 12:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.165 12:59:11 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:53.165 12:59:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.165 12:59:11 -- common/autotest_common.sh@10 -- # set +x 00:13:53.165 Dev_2 00:13:53.165 12:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.165 12:59:11 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:13:53.165 12:59:11 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:53.165 12:59:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:53.165 12:59:11 -- common/autotest_common.sh@889 -- # local i 00:13:53.165 12:59:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:53.165 12:59:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:53.165 12:59:11 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:53.165 12:59:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.165 12:59:11 -- common/autotest_common.sh@10 -- # set +x 00:13:53.165 12:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.165 12:59:11 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:53.165 12:59:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.165 12:59:11 -- common/autotest_common.sh@10 -- # set +x 00:13:53.165 [ 00:13:53.165 { 00:13:53.165 "name": "Dev_2", 00:13:53.165 "aliases": [ 00:13:53.165 "c306a39b-dfec-4a4f-81f0-934a216a8fd3" 00:13:53.165 ], 00:13:53.165 "product_name": "Malloc disk", 00:13:53.165 "block_size": 512, 00:13:53.165 "num_blocks": 262144, 00:13:53.165 "uuid": "c306a39b-dfec-4a4f-81f0-934a216a8fd3", 00:13:53.165 "assigned_rate_limits": { 00:13:53.165 "rw_ios_per_sec": 0, 00:13:53.165 "rw_mbytes_per_sec": 0, 00:13:53.165 
"r_mbytes_per_sec": 0, 00:13:53.165 "w_mbytes_per_sec": 0 00:13:53.165 }, 00:13:53.165 "claimed": false, 00:13:53.165 "zoned": false, 00:13:53.165 "supported_io_types": { 00:13:53.165 "read": true, 00:13:53.165 "write": true, 00:13:53.165 "unmap": true, 00:13:53.165 "write_zeroes": true, 00:13:53.165 "flush": true, 00:13:53.165 "reset": true, 00:13:53.165 "compare": false, 00:13:53.165 "compare_and_write": false, 00:13:53.165 "abort": true, 00:13:53.165 "nvme_admin": false, 00:13:53.166 "nvme_io": false 00:13:53.166 }, 00:13:53.166 "memory_domains": [ 00:13:53.166 { 00:13:53.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.166 "dma_device_type": 2 00:13:53.166 } 00:13:53.166 ], 00:13:53.166 "driver_specific": {} 00:13:53.166 } 00:13:53.166 ] 00:13:53.166 12:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.166 12:59:11 -- common/autotest_common.sh@895 -- # return 0 00:13:53.166 12:59:11 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:53.166 12:59:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.166 12:59:11 -- common/autotest_common.sh@10 -- # set +x 00:13:53.166 12:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.166 12:59:11 -- bdev/blockdev.sh@513 -- # NOT wait 114113 00:13:53.166 12:59:11 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:53.166 12:59:11 -- common/autotest_common.sh@640 -- # local es=0 00:13:53.166 12:59:11 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 114113 00:13:53.166 12:59:11 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:53.166 12:59:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:53.166 12:59:11 -- common/autotest_common.sh@632 -- # type -t wait 00:13:53.166 12:59:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:53.166 12:59:11 -- common/autotest_common.sh@643 -- # wait 114113 00:13:53.427 Running I/O for 5 seconds... 
00:13:53.427 task offset: 172536 on job bdev=EE_Dev_1 fails 00:13:53.427 00:13:53.427 Latency(us) 00:13:53.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.427 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:53.427 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:53.427 EE_Dev_1 : 0.00 35483.87 138.61 8064.52 0.00 301.53 124.74 547.37 00:13:53.427 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:53.427 Dev_2 : 0.00 23756.50 92.80 0.00 0.00 477.59 104.73 882.50 00:13:53.427 =================================================================================================================== 00:13:53.427 Total : 59240.37 231.41 8064.52 0.00 397.02 104.73 882.50 00:13:53.427 request: 00:13:53.427 { 00:13:53.427 "method": "perform_tests", 00:13:53.427 "req_id": 1 00:13:53.427 } 00:13:53.427 Got JSON-RPC error response 00:13:53.427 response: 00:13:53.427 { 00:13:53.427 "code": -32603, 00:13:53.427 "message": "bdevperf failed with error Operation not permitted" 00:13:53.427 } 00:13:53.427 [2024-06-11 12:59:12.075507] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:55.328 12:59:13 -- common/autotest_common.sh@643 -- # es=255 00:13:55.328 12:59:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:55.328 12:59:13 -- common/autotest_common.sh@652 -- # es=127 00:13:55.328 12:59:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:13:55.328 12:59:13 -- common/autotest_common.sh@660 -- # es=1 00:13:55.328 12:59:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:55.328 00:13:55.328 real 0m11.761s 00:13:55.328 user 0m11.665s 00:13:55.328 sys 0m0.921s 00:13:55.328 12:59:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.328 ************************************ 00:13:55.328 END TEST bdev_error 00:13:55.328 ************************************ 00:13:55.328 12:59:13 -- common/autotest_common.sh@10 -- # set +x 00:13:55.328 12:59:13 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:13:55.328 12:59:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:55.328 12:59:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:55.328 12:59:13 -- common/autotest_common.sh@10 -- # set +x 00:13:55.328 ************************************ 00:13:55.328 START TEST bdev_stat 00:13:55.328 ************************************ 00:13:55.328 12:59:13 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:13:55.328 12:59:13 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:13:55.328 12:59:13 -- bdev/blockdev.sh@594 -- # STAT_PID=114177 00:13:55.328 Process Bdev IO statistics testing pid: 114177 00:13:55.328 12:59:13 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 114177' 00:13:55.328 12:59:13 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:55.328 12:59:13 -- bdev/blockdev.sh@597 -- # waitforlisten 114177 00:13:55.328 12:59:13 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:55.328 12:59:13 -- common/autotest_common.sh@819 -- # '[' -z 114177 ']' 00:13:55.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
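The aborted run above is the intended outcome of this second bdev_error pass: EE_Dev_1 is the error bdev stacked on Dev_1 by bdev_error_create, bdev_error_inject_error EE_Dev_1 all failure -n 5 arms it to fail the next five I/Os, and unlike the first pass this bdevperf instance was started without -f (continue on error), so perform_tests surfaces the injected failures as the JSON-RPC error shown and the surrounding NOT/wait helpers invert that non-zero status into a test pass. Expressed as a plain rpc.py sequence (rpc_cmd here is the suite's wrapper around the same RPCs; flags exactly as traced above):

  scripts/rpc.py bdev_malloc_create -b Dev_1 128 512
  scripts/rpc.py bdev_error_create Dev_1                        # exposes EE_Dev_1 on top of Dev_1
  scripts/rpc.py bdev_malloc_create -b Dev_2 128 512
  scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5
  examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests         # expected to fail without -f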
00:13:55.328 12:59:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.328 12:59:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:55.328 12:59:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.328 12:59:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:55.328 12:59:13 -- common/autotest_common.sh@10 -- # set +x 00:13:55.328 [2024-06-11 12:59:13.796293] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:55.328 [2024-06-11 12:59:13.796476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114177 ] 00:13:55.328 [2024-06-11 12:59:13.971282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:55.586 [2024-06-11 12:59:14.209547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.586 [2024-06-11 12:59:14.209549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.153 12:59:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:56.153 12:59:14 -- common/autotest_common.sh@852 -- # return 0 00:13:56.153 12:59:14 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:56.153 12:59:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.153 12:59:14 -- common/autotest_common.sh@10 -- # set +x 00:13:56.153 Malloc_STAT 00:13:56.153 12:59:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.153 12:59:14 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:13:56.153 12:59:14 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:13:56.153 12:59:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:56.153 12:59:14 -- common/autotest_common.sh@889 -- # local i 00:13:56.153 12:59:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:56.153 12:59:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:56.153 12:59:14 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:56.153 12:59:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.153 12:59:14 -- common/autotest_common.sh@10 -- # set +x 00:13:56.153 12:59:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.153 12:59:14 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:56.153 12:59:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.153 12:59:14 -- common/autotest_common.sh@10 -- # set +x 00:13:56.153 [ 00:13:56.153 { 00:13:56.153 "name": "Malloc_STAT", 00:13:56.153 "aliases": [ 00:13:56.153 "c26f0039-d793-4b18-89ec-c0dc4480da4e" 00:13:56.153 ], 00:13:56.153 "product_name": "Malloc disk", 00:13:56.153 "block_size": 512, 00:13:56.153 "num_blocks": 262144, 00:13:56.153 "uuid": "c26f0039-d793-4b18-89ec-c0dc4480da4e", 00:13:56.153 "assigned_rate_limits": { 00:13:56.153 "rw_ios_per_sec": 0, 00:13:56.153 "rw_mbytes_per_sec": 0, 00:13:56.153 "r_mbytes_per_sec": 0, 00:13:56.153 "w_mbytes_per_sec": 0 00:13:56.153 }, 00:13:56.153 "claimed": false, 00:13:56.153 "zoned": false, 00:13:56.153 "supported_io_types": { 00:13:56.153 "read": true, 00:13:56.153 "write": true, 00:13:56.153 "unmap": true, 00:13:56.153 "write_zeroes": true, 00:13:56.153 "flush": true, 00:13:56.153 "reset": true, 00:13:56.153 "compare": false, 00:13:56.153 
"compare_and_write": false, 00:13:56.153 "abort": true, 00:13:56.153 "nvme_admin": false, 00:13:56.153 "nvme_io": false 00:13:56.153 }, 00:13:56.153 "memory_domains": [ 00:13:56.153 { 00:13:56.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.153 "dma_device_type": 2 00:13:56.153 } 00:13:56.153 ], 00:13:56.153 "driver_specific": {} 00:13:56.153 } 00:13:56.153 ] 00:13:56.153 12:59:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.153 12:59:14 -- common/autotest_common.sh@895 -- # return 0 00:13:56.153 12:59:14 -- bdev/blockdev.sh@603 -- # sleep 2 00:13:56.153 12:59:14 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:56.153 Running I/O for 10 seconds... 00:13:58.054 12:59:16 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:13:58.054 12:59:16 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:13:58.054 12:59:16 -- bdev/blockdev.sh@558 -- # local iostats 00:13:58.054 12:59:16 -- bdev/blockdev.sh@559 -- # local io_count1 00:13:58.054 12:59:16 -- bdev/blockdev.sh@560 -- # local io_count2 00:13:58.054 12:59:16 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:13:58.054 12:59:16 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:13:58.054 12:59:16 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:13:58.054 12:59:16 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:13:58.054 12:59:16 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:58.054 12:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.054 12:59:16 -- common/autotest_common.sh@10 -- # set +x 00:13:58.313 12:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.313 12:59:16 -- bdev/blockdev.sh@566 -- # iostats='{ 00:13:58.313 "tick_rate": 2200000000, 00:13:58.313 "ticks": 1757469497428, 00:13:58.313 "bdevs": [ 00:13:58.313 { 00:13:58.313 "name": "Malloc_STAT", 00:13:58.313 "bytes_read": 790663680, 00:13:58.313 "num_read_ops": 193027, 00:13:58.313 "bytes_written": 0, 00:13:58.313 "num_write_ops": 0, 00:13:58.313 "bytes_unmapped": 0, 00:13:58.313 "num_unmap_ops": 0, 00:13:58.313 "bytes_copied": 0, 00:13:58.313 "num_copy_ops": 0, 00:13:58.313 "read_latency_ticks": 2160664219697, 00:13:58.313 "max_read_latency_ticks": 33420758, 00:13:58.313 "min_read_latency_ticks": 403484, 00:13:58.313 "write_latency_ticks": 0, 00:13:58.313 "max_write_latency_ticks": 0, 00:13:58.313 "min_write_latency_ticks": 0, 00:13:58.313 "unmap_latency_ticks": 0, 00:13:58.313 "max_unmap_latency_ticks": 0, 00:13:58.313 "min_unmap_latency_ticks": 0, 00:13:58.313 "copy_latency_ticks": 0, 00:13:58.313 "max_copy_latency_ticks": 0, 00:13:58.313 "min_copy_latency_ticks": 0, 00:13:58.313 "io_error": {} 00:13:58.313 } 00:13:58.313 ] 00:13:58.313 }' 00:13:58.313 12:59:16 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:13:58.313 12:59:16 -- bdev/blockdev.sh@567 -- # io_count1=193027 00:13:58.313 12:59:16 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:58.313 12:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.313 12:59:16 -- common/autotest_common.sh@10 -- # set +x 00:13:58.313 12:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.313 12:59:16 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:13:58.313 "tick_rate": 2200000000, 00:13:58.313 "ticks": 1757605255026, 00:13:58.313 "name": "Malloc_STAT", 00:13:58.313 "channels": [ 00:13:58.313 { 00:13:58.313 "thread_id": 2, 00:13:58.313 "bytes_read": 392167424, 
00:13:58.313 "num_read_ops": 95744, 00:13:58.313 "bytes_written": 0, 00:13:58.313 "num_write_ops": 0, 00:13:58.313 "bytes_unmapped": 0, 00:13:58.313 "num_unmap_ops": 0, 00:13:58.313 "bytes_copied": 0, 00:13:58.313 "num_copy_ops": 0, 00:13:58.313 "read_latency_ticks": 1113859353554, 00:13:58.313 "max_read_latency_ticks": 33420758, 00:13:58.313 "min_read_latency_ticks": 7522516, 00:13:58.313 "write_latency_ticks": 0, 00:13:58.313 "max_write_latency_ticks": 0, 00:13:58.313 "min_write_latency_ticks": 0, 00:13:58.313 "unmap_latency_ticks": 0, 00:13:58.313 "max_unmap_latency_ticks": 0, 00:13:58.313 "min_unmap_latency_ticks": 0, 00:13:58.313 "copy_latency_ticks": 0, 00:13:58.313 "max_copy_latency_ticks": 0, 00:13:58.313 "min_copy_latency_ticks": 0 00:13:58.313 }, 00:13:58.313 { 00:13:58.313 "thread_id": 3, 00:13:58.313 "bytes_read": 415236096, 00:13:58.313 "num_read_ops": 101376, 00:13:58.313 "bytes_written": 0, 00:13:58.313 "num_write_ops": 0, 00:13:58.313 "bytes_unmapped": 0, 00:13:58.313 "num_unmap_ops": 0, 00:13:58.313 "bytes_copied": 0, 00:13:58.313 "num_copy_ops": 0, 00:13:58.313 "read_latency_ticks": 1117870393550, 00:13:58.313 "max_read_latency_ticks": 18932800, 00:13:58.313 "min_read_latency_ticks": 7230032, 00:13:58.313 "write_latency_ticks": 0, 00:13:58.313 "max_write_latency_ticks": 0, 00:13:58.313 "min_write_latency_ticks": 0, 00:13:58.313 "unmap_latency_ticks": 0, 00:13:58.313 "max_unmap_latency_ticks": 0, 00:13:58.313 "min_unmap_latency_ticks": 0, 00:13:58.313 "copy_latency_ticks": 0, 00:13:58.313 "max_copy_latency_ticks": 0, 00:13:58.313 "min_copy_latency_ticks": 0 00:13:58.313 } 00:13:58.313 ] 00:13:58.313 }' 00:13:58.313 12:59:16 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:13:58.313 12:59:16 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=95744 00:13:58.313 12:59:16 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=95744 00:13:58.313 12:59:16 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:13:58.313 12:59:17 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=101376 00:13:58.313 12:59:17 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=197120 00:13:58.313 12:59:17 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:58.313 12:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.314 12:59:17 -- common/autotest_common.sh@10 -- # set +x 00:13:58.314 12:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.314 12:59:17 -- bdev/blockdev.sh@575 -- # iostats='{ 00:13:58.314 "tick_rate": 2200000000, 00:13:58.314 "ticks": 1757878490212, 00:13:58.314 "bdevs": [ 00:13:58.314 { 00:13:58.314 "name": "Malloc_STAT", 00:13:58.314 "bytes_read": 840995328, 00:13:58.314 "num_read_ops": 205315, 00:13:58.314 "bytes_written": 0, 00:13:58.314 "num_write_ops": 0, 00:13:58.314 "bytes_unmapped": 0, 00:13:58.314 "num_unmap_ops": 0, 00:13:58.314 "bytes_copied": 0, 00:13:58.314 "num_copy_ops": 0, 00:13:58.314 "read_latency_ticks": 2369596157363, 00:13:58.314 "max_read_latency_ticks": 33420758, 00:13:58.314 "min_read_latency_ticks": 403484, 00:13:58.314 "write_latency_ticks": 0, 00:13:58.314 "max_write_latency_ticks": 0, 00:13:58.314 "min_write_latency_ticks": 0, 00:13:58.314 "unmap_latency_ticks": 0, 00:13:58.314 "max_unmap_latency_ticks": 0, 00:13:58.314 "min_unmap_latency_ticks": 0, 00:13:58.314 "copy_latency_ticks": 0, 00:13:58.314 "max_copy_latency_ticks": 0, 00:13:58.314 "min_copy_latency_ticks": 0, 00:13:58.314 "io_error": {} 00:13:58.314 } 00:13:58.314 ] 00:13:58.314 }' 00:13:58.314 12:59:17 -- 
bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:13:58.314 12:59:17 -- bdev/blockdev.sh@576 -- # io_count2=205315 00:13:58.314 12:59:17 -- bdev/blockdev.sh@581 -- # '[' 197120 -lt 193027 ']' 00:13:58.314 12:59:17 -- bdev/blockdev.sh@581 -- # '[' 197120 -gt 205315 ']' 00:13:58.314 12:59:17 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:58.314 12:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.314 12:59:17 -- common/autotest_common.sh@10 -- # set +x 00:13:58.573 00:13:58.573 Latency(us) 00:13:58.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.573 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:58.573 Malloc_STAT : 2.20 46565.52 181.90 0.00 0.00 5483.26 2249.08 15192.44 00:13:58.573 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:58.573 Malloc_STAT : 2.20 49587.09 193.70 0.00 0.00 5150.79 1861.82 8638.84 00:13:58.573 =================================================================================================================== 00:13:58.573 Total : 96152.61 375.60 0.00 0.00 5311.79 1861.82 15192.44 00:13:58.573 0 00:13:58.573 12:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.573 12:59:17 -- bdev/blockdev.sh@607 -- # killprocess 114177 00:13:58.573 12:59:17 -- common/autotest_common.sh@926 -- # '[' -z 114177 ']' 00:13:58.573 12:59:17 -- common/autotest_common.sh@930 -- # kill -0 114177 00:13:58.573 12:59:17 -- common/autotest_common.sh@931 -- # uname 00:13:58.573 12:59:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:58.573 12:59:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114177 00:13:58.573 12:59:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:58.573 12:59:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:58.573 12:59:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114177' 00:13:58.573 killing process with pid 114177 00:13:58.573 12:59:17 -- common/autotest_common.sh@945 -- # kill 114177 00:13:58.573 Received shutdown signal, test time was about 2.331508 seconds 00:13:58.573 00:13:58.573 Latency(us) 00:13:58.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.573 =================================================================================================================== 00:13:58.573 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:58.573 12:59:17 -- common/autotest_common.sh@950 -- # wait 114177 00:13:59.952 12:59:18 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:13:59.952 00:13:59.952 real 0m4.692s 00:13:59.952 user 0m8.803s 00:13:59.952 sys 0m0.429s 00:13:59.952 12:59:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.952 ************************************ 00:13:59.952 END TEST bdev_stat 00:13:59.952 ************************************ 00:13:59.952 12:59:18 -- common/autotest_common.sh@10 -- # set +x 00:13:59.952 12:59:18 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:13:59.952 12:59:18 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:13:59.952 12:59:18 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:59.952 12:59:18 -- bdev/blockdev.sh@809 -- # cleanup 00:13:59.952 12:59:18 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:59.952 12:59:18 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:59.952 
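The bdev_stat pass above takes a whole-device iostat snapshot, reads both channels mid-run, takes a second snapshot, and then requires the per-channel sum to land between the two totals (the '[' 197120 -lt 193027 ']' and '[' 197120 -gt 205315 ']' checks). The same arithmetic condensed into one hand-run sketch; summing the channels with a single jq add is shorthand for the two per-channel reads in the trace:

  # Sketch of the consistency check: per-channel reads summed mid-run must
  # fall between two whole-device num_read_ops snapshots.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  io_count1=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  per_channel=$($rpc bdev_get_iostat -b Malloc_STAT -c | jq -r '[.channels[].num_read_ops] | add')
  io_count2=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')

  [ "$per_channel" -ge "$io_count1" ] && [ "$per_channel" -le "$io_count2" ] \
      || echo "per-channel sum $per_channel outside [$io_count1, $io_count2]" >&2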
************************************ 00:13:59.952 END TEST blockdev_general 00:13:59.952 ************************************ 00:13:59.952 12:59:18 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:13:59.952 12:59:18 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:13:59.952 12:59:18 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:13:59.952 12:59:18 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:13:59.952 00:13:59.952 real 2m21.079s 00:13:59.952 user 5m49.712s 00:13:59.952 sys 0m20.922s 00:13:59.952 12:59:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.952 12:59:18 -- common/autotest_common.sh@10 -- # set +x 00:13:59.952 12:59:18 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:59.952 12:59:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:59.952 12:59:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:59.952 12:59:18 -- common/autotest_common.sh@10 -- # set +x 00:13:59.952 ************************************ 00:13:59.952 START TEST bdev_raid 00:13:59.952 ************************************ 00:13:59.952 12:59:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:59.952 * Looking for test storage... 00:13:59.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:59.952 12:59:18 -- bdev/nbd_common.sh@6 -- # set -e 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@716 -- # uname -s 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:59.952 12:59:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:59.952 12:59:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:59.952 12:59:18 -- common/autotest_common.sh@10 -- # set +x 00:13:59.952 ************************************ 00:13:59.952 START TEST raid_function_test_raid0 00:13:59.952 ************************************ 00:13:59.952 12:59:18 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:59.952 12:59:18 -- bdev/bdev_raid.sh@86 -- # raid_pid=114346 00:13:59.953 12:59:18 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:59.953 12:59:18 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 114346' 00:13:59.953 Process raid pid: 114346 00:13:59.953 12:59:18 -- bdev/bdev_raid.sh@88 -- # waitforlisten 114346 /var/tmp/spdk-raid.sock 00:13:59.953 12:59:18 -- common/autotest_common.sh@819 -- # '[' -z 114346 ']' 00:13:59.953 12:59:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:59.953 12:59:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:59.953 12:59:18 
-- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:59.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:59.953 12:59:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:59.953 12:59:18 -- common/autotest_common.sh@10 -- # set +x 00:13:59.953 [2024-06-11 12:59:18.688184] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:59.953 [2024-06-11 12:59:18.688569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.212 [2024-06-11 12:59:18.853836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.212 [2024-06-11 12:59:19.021562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.471 [2024-06-11 12:59:19.192677] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.038 12:59:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:01.038 12:59:19 -- common/autotest_common.sh@852 -- # return 0 00:14:01.038 12:59:19 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:14:01.038 12:59:19 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:14:01.038 12:59:19 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:01.038 12:59:19 -- bdev/bdev_raid.sh@70 -- # cat 00:14:01.038 12:59:19 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:01.296 [2024-06-11 12:59:19.945581] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:01.296 [2024-06-11 12:59:19.947666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:01.296 [2024-06-11 12:59:19.947880] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:01.296 [2024-06-11 12:59:19.948038] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:01.296 [2024-06-11 12:59:19.948220] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:01.296 [2024-06-11 12:59:19.948580] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:01.296 [2024-06-11 12:59:19.948697] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:14:01.296 [2024-06-11 12:59:19.948965] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.296 Base_1 00:14:01.296 Base_2 00:14:01.296 12:59:19 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:01.296 12:59:19 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:01.296 12:59:19 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:01.554 12:59:20 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:01.554 12:59:20 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:01.554 12:59:20 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:01.554 12:59:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:01.554 12:59:20 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:14:01.554 12:59:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.554 
12:59:20 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:14:01.554 12:59:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.554 12:59:20 -- bdev/nbd_common.sh@12 -- # local i 00:14:01.554 12:59:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.554 12:59:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.554 12:59:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:01.554 [2024-06-11 12:59:20.385679] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:01.814 /dev/nbd0 00:14:01.814 12:59:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.814 12:59:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.814 12:59:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:01.814 12:59:20 -- common/autotest_common.sh@857 -- # local i 00:14:01.814 12:59:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:01.814 12:59:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:01.814 12:59:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:01.814 12:59:20 -- common/autotest_common.sh@861 -- # break 00:14:01.814 12:59:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:01.814 12:59:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:01.814 12:59:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.814 1+0 records in 00:14:01.814 1+0 records out 00:14:01.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060794 s, 6.7 MB/s 00:14:01.814 12:59:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.814 12:59:20 -- common/autotest_common.sh@874 -- # size=4096 00:14:01.814 12:59:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.814 12:59:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:01.814 12:59:20 -- common/autotest_common.sh@877 -- # return 0 00:14:01.814 12:59:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.814 12:59:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.814 12:59:20 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:01.814 12:59:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:01.814 12:59:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:01.814 12:59:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:01.814 { 00:14:01.814 "nbd_device": "/dev/nbd0", 00:14:01.814 "bdev_name": "raid" 00:14:01.814 } 00:14:01.814 ]' 00:14:01.814 12:59:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:01.814 { 00:14:01.814 "nbd_device": "/dev/nbd0", 00:14:01.814 "bdev_name": "raid" 00:14:01.814 } 00:14:01.814 ]' 00:14:01.814 12:59:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:02.073 12:59:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:02.073 12:59:20 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:02.073 12:59:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:02.073 12:59:20 -- bdev/nbd_common.sh@65 -- # count=1 00:14:02.073 12:59:20 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:02.073 
12:59:20 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:02.073 4096+0 records in 00:14:02.073 4096+0 records out 00:14:02.073 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0171032 s, 123 MB/s 00:14:02.073 12:59:20 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:02.332 4096+0 records in 00:14:02.332 4096+0 records out 00:14:02.332 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.256694 s, 8.2 MB/s 00:14:02.332 12:59:20 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:02.332 12:59:20 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:02.332 128+0 records in 00:14:02.332 128+0 records out 00:14:02.332 65536 bytes (66 kB, 64 KiB) copied, 0.00062936 s, 104 MB/s 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:02.332 2035+0 records in 00:14:02.332 2035+0 records out 00:14:02.332 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00843065 s, 124 MB/s 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@38 -- # 
unmap_off=164352 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:02.332 456+0 records in 00:14:02.332 456+0 records out 00:14:02.332 233472 bytes (233 kB, 228 KiB) copied, 0.00209121 s, 112 MB/s 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:02.332 12:59:21 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:02.332 12:59:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:02.332 12:59:21 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:02.332 12:59:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.332 12:59:21 -- bdev/nbd_common.sh@51 -- # local i 00:14:02.332 12:59:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.332 12:59:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:14:02.591 [2024-06-11 12:59:21.318698] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@41 -- # break 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.591 12:59:21 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:02.591 12:59:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:02.850 12:59:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:02.850 12:59:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:02.850 12:59:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:03.109 12:59:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:03.109 12:59:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:03.109 12:59:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:03.109 12:59:21 -- bdev/nbd_common.sh@65 -- # true 00:14:03.109 12:59:21 -- bdev/nbd_common.sh@65 -- # count=0 00:14:03.109 12:59:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:03.109 12:59:21 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:03.109 12:59:21 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:03.109 12:59:21 -- bdev/bdev_raid.sh@111 -- # killprocess 114346 00:14:03.109 12:59:21 -- common/autotest_common.sh@926 -- # '[' -z 114346 ']' 00:14:03.109 12:59:21 -- 
common/autotest_common.sh@930 -- # kill -0 114346 00:14:03.109 12:59:21 -- common/autotest_common.sh@931 -- # uname 00:14:03.109 12:59:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:03.109 12:59:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114346 00:14:03.109 killing process with pid 114346 00:14:03.109 12:59:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:03.109 12:59:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:03.109 12:59:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114346' 00:14:03.109 12:59:21 -- common/autotest_common.sh@945 -- # kill 114346 00:14:03.109 12:59:21 -- common/autotest_common.sh@950 -- # wait 114346 00:14:03.109 [2024-06-11 12:59:21.764574] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.109 [2024-06-11 12:59:21.764705] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.109 [2024-06-11 12:59:21.764769] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.109 [2024-06-11 12:59:21.764788] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:14:03.109 [2024-06-11 12:59:21.902459] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.484 ************************************ 00:14:04.484 END TEST raid_function_test_raid0 00:14:04.484 ************************************ 00:14:04.484 12:59:22 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:04.484 00:14:04.484 real 0m4.282s 00:14:04.484 user 0m5.547s 00:14:04.484 sys 0m0.810s 00:14:04.484 12:59:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.484 12:59:22 -- common/autotest_common.sh@10 -- # set +x 00:14:04.485 12:59:22 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:14:04.485 12:59:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:04.485 12:59:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:04.485 12:59:22 -- common/autotest_common.sh@10 -- # set +x 00:14:04.485 ************************************ 00:14:04.485 START TEST raid_function_test_concat 00:14:04.485 ************************************ 00:14:04.485 12:59:22 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:14:04.485 12:59:22 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:14:04.485 12:59:22 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:04.485 12:59:22 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:04.485 12:59:22 -- bdev/bdev_raid.sh@86 -- # raid_pid=114501 00:14:04.485 Process raid pid: 114501 00:14:04.485 12:59:22 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 114501' 00:14:04.485 12:59:22 -- bdev/bdev_raid.sh@88 -- # waitforlisten 114501 /var/tmp/spdk-raid.sock 00:14:04.485 12:59:22 -- common/autotest_common.sh@819 -- # '[' -z 114501 ']' 00:14:04.485 12:59:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:04.485 12:59:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:04.485 12:59:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:04.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
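The raid0 function test above writes a 2 MiB random pattern through /dev/nbd0, verifies it with cmp, then punches three discard ranges while zeroing the matching ranges in the reference file and re-compares after each; the concat pass that begins here repeats the identical steps. One verify round condensed below, with every command, offset and length copied from the trace (it assumes the raid bdev is already exported on /dev/nbd0):

  # One unmap-verify round, as traced above.
  dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
  dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
  blockdev --flushbufs /dev/nbd0
  cmp -b -n 2097152 /raidrandtest /dev/nbd0

  # Zero a range in the reference file, discard the same range on the raid
  # device; the two must still compare equal afterwards.
  dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc
  blkdiscard -o 0 -l 65536 /dev/nbd0
  blockdev --flushbufs /dev/nbd0
  cmp -b -n 2097152 /raidrandtest /dev/nbd0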
00:14:04.485 12:59:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:04.485 12:59:22 -- common/autotest_common.sh@10 -- # set +x 00:14:04.485 12:59:22 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:04.485 [2024-06-11 12:59:23.024141] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:04.485 [2024-06-11 12:59:23.024532] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.485 [2024-06-11 12:59:23.184790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.744 [2024-06-11 12:59:23.371941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.744 [2024-06-11 12:59:23.550641] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.313 12:59:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:05.313 12:59:23 -- common/autotest_common.sh@852 -- # return 0 00:14:05.313 12:59:23 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:14:05.313 12:59:23 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:14:05.313 12:59:23 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:05.313 12:59:23 -- bdev/bdev_raid.sh@70 -- # cat 00:14:05.313 12:59:23 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:05.572 [2024-06-11 12:59:24.197217] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:05.572 [2024-06-11 12:59:24.199084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:05.572 [2024-06-11 12:59:24.199163] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:05.572 [2024-06-11 12:59:24.199175] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:05.572 [2024-06-11 12:59:24.199304] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:05.572 [2024-06-11 12:59:24.199765] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:05.572 [2024-06-11 12:59:24.199791] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:14:05.572 [2024-06-11 12:59:24.199961] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.572 Base_1 00:14:05.572 Base_2 00:14:05.572 12:59:24 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:05.572 12:59:24 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:05.572 12:59:24 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:05.831 12:59:24 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:05.831 12:59:24 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:05.831 12:59:24 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:05.831 12:59:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:05.831 12:59:24 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:14:05.831 12:59:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:05.831 12:59:24 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:14:05.831 12:59:24 -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:14:05.831 12:59:24 -- bdev/nbd_common.sh@12 -- # local i 00:14:05.831 12:59:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:05.831 12:59:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.831 12:59:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:05.831 [2024-06-11 12:59:24.641369] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:06.089 /dev/nbd0 00:14:06.089 12:59:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.089 12:59:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:06.089 12:59:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:06.089 12:59:24 -- common/autotest_common.sh@857 -- # local i 00:14:06.089 12:59:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:06.089 12:59:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:06.089 12:59:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:06.089 12:59:24 -- common/autotest_common.sh@861 -- # break 00:14:06.089 12:59:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:06.089 12:59:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:06.090 12:59:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.090 1+0 records in 00:14:06.090 1+0 records out 00:14:06.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530549 s, 7.7 MB/s 00:14:06.090 12:59:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.090 12:59:24 -- common/autotest_common.sh@874 -- # size=4096 00:14:06.090 12:59:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.090 12:59:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:06.090 12:59:24 -- common/autotest_common.sh@877 -- # return 0 00:14:06.090 12:59:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.090 12:59:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.090 12:59:24 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:06.090 12:59:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:06.090 12:59:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:06.348 12:59:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:06.348 { 00:14:06.348 "nbd_device": "/dev/nbd0", 00:14:06.348 "bdev_name": "raid" 00:14:06.348 } 00:14:06.348 ]' 00:14:06.348 12:59:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:06.348 { 00:14:06.348 "nbd_device": "/dev/nbd0", 00:14:06.348 "bdev_name": "raid" 00:14:06.348 } 00:14:06.348 ]' 00:14:06.348 12:59:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:06.348 12:59:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:06.348 12:59:24 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:06.348 12:59:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:06.348 12:59:24 -- bdev/nbd_common.sh@65 -- # count=1 00:14:06.348 12:59:24 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:06.348 12:59:24 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:06.348 12:59:24 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:06.348 12:59:24 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:06.348 12:59:24 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:06.348 12:59:24 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:06.348 12:59:24 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:06.348 12:59:24 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:06.348 12:59:24 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:06.348 12:59:24 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:06.348 12:59:24 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:06.348 12:59:25 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:06.348 12:59:25 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:06.348 12:59:25 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:06.348 12:59:25 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:14:06.348 12:59:25 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:06.348 12:59:25 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:14:06.348 12:59:25 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:06.348 12:59:25 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:06.348 12:59:25 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:06.348 12:59:25 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:06.348 4096+0 records in 00:14:06.348 4096+0 records out 00:14:06.348 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0267215 s, 78.5 MB/s 00:14:06.348 12:59:25 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:06.607 4096+0 records in 00:14:06.607 4096+0 records out 00:14:06.607 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.246922 s, 8.5 MB/s 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:06.607 128+0 records in 00:14:06.607 128+0 records out 00:14:06.607 65536 bytes (66 kB, 64 KiB) copied, 0.000653012 s, 100 MB/s 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:06.607 2035+0 records in 00:14:06.607 2035+0 records out 00:14:06.607 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00740391 s, 141 MB/s 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@39 -- # 
unmap_len=233472 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:06.607 456+0 records in 00:14:06.607 456+0 records out 00:14:06.607 233472 bytes (233 kB, 228 KiB) copied, 0.00209059 s, 112 MB/s 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:06.607 12:59:25 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:06.607 12:59:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:06.607 12:59:25 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:06.607 12:59:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:06.607 12:59:25 -- bdev/nbd_common.sh@51 -- # local i 00:14:06.607 12:59:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.607 12:59:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:06.865 12:59:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:06.866 12:59:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:06.866 12:59:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:06.866 12:59:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.866 12:59:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.866 12:59:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:06.866 [2024-06-11 12:59:25.640099] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.866 12:59:25 -- bdev/nbd_common.sh@41 -- # break 00:14:06.866 12:59:25 -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.866 12:59:25 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:06.866 12:59:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:06.866 12:59:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:07.125 12:59:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:07.125 12:59:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:07.125 12:59:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:07.125 12:59:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:07.125 12:59:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:07.125 12:59:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:07.125 12:59:25 -- bdev/nbd_common.sh@65 -- # true 00:14:07.125 12:59:25 -- bdev/nbd_common.sh@65 -- # count=0 00:14:07.125 12:59:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:07.125 12:59:25 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:07.125 12:59:25 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:07.125 12:59:25 -- bdev/bdev_raid.sh@111 -- # killprocess 114501 00:14:07.125 12:59:25 -- common/autotest_common.sh@926 -- # '[' -z 114501 ']' 00:14:07.125 12:59:25 -- common/autotest_common.sh@930 -- # kill -0 114501 00:14:07.125 12:59:25 -- common/autotest_common.sh@931 -- # uname 00:14:07.125 12:59:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:07.125 12:59:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114501 00:14:07.125 killing process with pid 114501 00:14:07.125 
12:59:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:07.125 12:59:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:07.125 12:59:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114501' 00:14:07.125 12:59:25 -- common/autotest_common.sh@945 -- # kill 114501 00:14:07.125 12:59:25 -- common/autotest_common.sh@950 -- # wait 114501 00:14:07.125 [2024-06-11 12:59:25.929904] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.125 [2024-06-11 12:59:25.930005] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.125 [2024-06-11 12:59:25.930095] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.125 [2024-06-11 12:59:25.930116] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:14:07.384 [2024-06-11 12:59:26.117258] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.318 ************************************ 00:14:08.318 END TEST raid_function_test_concat 00:14:08.318 ************************************ 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:08.318 00:14:08.318 real 0m4.112s 00:14:08.318 user 0m5.260s 00:14:08.318 sys 0m0.841s 00:14:08.318 12:59:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.318 12:59:27 -- common/autotest_common.sh@10 -- # set +x 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:14:08.318 12:59:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:08.318 12:59:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:08.318 12:59:27 -- common/autotest_common.sh@10 -- # set +x 00:14:08.318 ************************************ 00:14:08.318 START TEST raid0_resize_test 00:14:08.318 ************************************ 00:14:08.318 Process raid pid: 114671 00:14:08.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:08.318 12:59:27 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@301 -- # raid_pid=114671 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 114671' 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@303 -- # waitforlisten 114671 /var/tmp/spdk-raid.sock 00:14:08.318 12:59:27 -- common/autotest_common.sh@819 -- # '[' -z 114671 ']' 00:14:08.318 12:59:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:08.318 12:59:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:08.318 12:59:27 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:08.318 12:59:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
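Every raid test in this suite follows the same harness pattern visible in the trace: a dedicated bdev_svc app is started on its own RPC socket (/var/tmp/spdk-raid.sock), every rpc.py call is pointed at that socket, and killprocess tears it down at the end. A sketch of that scaffolding; the rpc_get_methods polling loop is my stand-in for the framework's waitforlisten helper, not taken from the log:

  # Per-test harness sketch: private bdev_svc instance plus RPC socket.
  svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  until $rpc -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # ... test body: bdev_null_create / bdev_raid_create / checks via $rpc ...

  kill $raid_pid
  wait $raid_pid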
00:14:08.318 12:59:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:08.318 12:59:27 -- common/autotest_common.sh@10 -- # set +x 00:14:08.576 [2024-06-11 12:59:27.190598] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:08.576 [2024-06-11 12:59:27.190819] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.576 [2024-06-11 12:59:27.359222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.835 [2024-06-11 12:59:27.552812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.094 [2024-06-11 12:59:27.730459] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.352 12:59:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:09.352 12:59:28 -- common/autotest_common.sh@852 -- # return 0 00:14:09.352 12:59:28 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:09.610 Base_1 00:14:09.610 12:59:28 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:09.869 Base_2 00:14:09.869 12:59:28 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:10.127 [2024-06-11 12:59:28.733532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:10.127 [2024-06-11 12:59:28.735246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:10.127 [2024-06-11 12:59:28.735332] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:10.127 [2024-06-11 12:59:28.735345] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:10.127 [2024-06-11 12:59:28.735550] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:14:10.127 [2024-06-11 12:59:28.735928] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:10.127 [2024-06-11 12:59:28.735966] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:14:10.127 [2024-06-11 12:59:28.736161] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.127 12:59:28 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:10.127 [2024-06-11 12:59:28.937538] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:10.127 [2024-06-11 12:59:28.937564] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:10.127 true 00:14:10.127 12:59:28 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:10.127 12:59:28 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:14:10.386 [2024-06-11 12:59:29.137671] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.386 12:59:29 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:14:10.386 12:59:29 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:14:10.386 12:59:29 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:14:10.386 
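The resize check traced above, and continued just below, builds a raid0 out of two 32 MiB null bdevs, grows Base_1, confirms the raid is still 64 MiB, then grows Base_2 and confirms it doubles. A hand-run sketch of that arithmetic, deriving the size from num_blocks exactly as the jq calls in the trace do:

  # raid0 size only grows once both base bdevs have been resized.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  blksize=512

  $rpc bdev_null_resize Base_1 64
  blkcnt=$($rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks')
  echo "Raid: $(( blkcnt * blksize / 1024 / 1024 )) MiB"   # still 64 (131072 blocks)

  $rpc bdev_null_resize Base_2 64
  blkcnt=$($rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks')
  echo "Raid: $(( blkcnt * blksize / 1024 / 1024 )) MiB"   # now 128 (262144 blocks)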
12:59:29 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:10.644 [2024-06-11 12:59:29.357603] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:10.644 [2024-06-11 12:59:29.357629] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:10.644 [2024-06-11 12:59:29.357683] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:14:10.644 [2024-06-11 12:59:29.357737] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:10.644 true 00:14:10.644 12:59:29 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:10.644 12:59:29 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:14:10.902 [2024-06-11 12:59:29.557768] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.902 12:59:29 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:14:10.902 12:59:29 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:14:10.902 12:59:29 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:14:10.902 12:59:29 -- bdev/bdev_raid.sh@332 -- # killprocess 114671 00:14:10.902 12:59:29 -- common/autotest_common.sh@926 -- # '[' -z 114671 ']' 00:14:10.902 12:59:29 -- common/autotest_common.sh@930 -- # kill -0 114671 00:14:10.902 12:59:29 -- common/autotest_common.sh@931 -- # uname 00:14:10.902 12:59:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:10.902 12:59:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114671 00:14:10.902 killing process with pid 114671 00:14:10.902 12:59:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:10.902 12:59:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:10.902 12:59:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114671' 00:14:10.902 12:59:29 -- common/autotest_common.sh@945 -- # kill 114671 00:14:10.902 12:59:29 -- common/autotest_common.sh@950 -- # wait 114671 00:14:10.902 [2024-06-11 12:59:29.590580] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.902 [2024-06-11 12:59:29.590656] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.902 [2024-06-11 12:59:29.590703] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.902 [2024-06-11 12:59:29.590712] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:14:10.902 [2024-06-11 12:59:29.591295] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.838 ************************************ 00:14:11.838 END TEST raid0_resize_test 00:14:11.838 ************************************ 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@334 -- # return 0 00:14:11.838 00:14:11.838 real 0m3.454s 00:14:11.838 user 0m4.892s 00:14:11.838 sys 0m0.493s 00:14:11.838 12:59:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.838 12:59:30 -- common/autotest_common.sh@10 -- # set +x 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:11.838 12:59:30 -- 
common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:11.838 12:59:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:11.838 12:59:30 -- common/autotest_common.sh@10 -- # set +x 00:14:11.838 ************************************ 00:14:11.838 START TEST raid_state_function_test 00:14:11.838 ************************************ 00:14:11.838 12:59:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@226 -- # raid_pid=114753 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114753' 00:14:11.838 Process raid pid: 114753 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:11.838 12:59:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114753 /var/tmp/spdk-raid.sock 00:14:11.838 12:59:30 -- common/autotest_common.sh@819 -- # '[' -z 114753 ']' 00:14:11.838 12:59:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:11.838 12:59:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:11.838 12:59:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:11.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:11.838 12:59:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:11.838 12:59:30 -- common/autotest_common.sh@10 -- # set +x 00:14:12.097 [2024-06-11 12:59:30.708156] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:14:12.097 [2024-06-11 12:59:30.708353] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.097 [2024-06-11 12:59:30.873699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.355 [2024-06-11 12:59:31.061032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.613 [2024-06-11 12:59:31.243955] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.871 12:59:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:12.871 12:59:31 -- common/autotest_common.sh@852 -- # return 0 00:14:12.871 12:59:31 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:13.129 [2024-06-11 12:59:31.833027] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.129 [2024-06-11 12:59:31.833139] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.129 [2024-06-11 12:59:31.833161] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:13.129 [2024-06-11 12:59:31.833192] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.129 12:59:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.395 12:59:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:13.395 "name": "Existed_Raid", 00:14:13.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.395 "strip_size_kb": 64, 00:14:13.395 "state": "configuring", 00:14:13.395 "raid_level": "raid0", 00:14:13.395 "superblock": false, 00:14:13.395 "num_base_bdevs": 2, 00:14:13.395 "num_base_bdevs_discovered": 0, 00:14:13.395 "num_base_bdevs_operational": 2, 00:14:13.395 "base_bdevs_list": [ 00:14:13.395 { 00:14:13.395 "name": "BaseBdev1", 00:14:13.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.395 "is_configured": false, 00:14:13.395 "data_offset": 0, 00:14:13.395 "data_size": 0 00:14:13.395 }, 00:14:13.395 { 00:14:13.395 "name": "BaseBdev2", 00:14:13.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.395 "is_configured": false, 00:14:13.395 "data_offset": 0, 00:14:13.395 "data_size": 0 00:14:13.395 } 00:14:13.395 ] 00:14:13.395 }' 00:14:13.395 12:59:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:13.395 12:59:32 -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.962 12:59:32 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:14.220 [2024-06-11 12:59:32.933031] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:14.220 [2024-06-11 12:59:32.933090] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:14.221 12:59:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:14.478 [2024-06-11 12:59:33.133108] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.478 [2024-06-11 12:59:33.133200] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.478 [2024-06-11 12:59:33.133215] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.478 [2024-06-11 12:59:33.133241] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.478 12:59:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:14.735 [2024-06-11 12:59:33.367994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.735 BaseBdev1 00:14:14.735 12:59:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:14.736 12:59:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:14.736 12:59:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:14.736 12:59:33 -- common/autotest_common.sh@889 -- # local i 00:14:14.736 12:59:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:14.736 12:59:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:14.736 12:59:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.736 12:59:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:14.994 [ 00:14:14.994 { 00:14:14.994 "name": "BaseBdev1", 00:14:14.994 "aliases": [ 00:14:14.994 "c3d74d01-ff64-4a23-a691-c319c2a6a4ba" 00:14:14.994 ], 00:14:14.994 "product_name": "Malloc disk", 00:14:14.994 "block_size": 512, 00:14:14.994 "num_blocks": 65536, 00:14:14.994 "uuid": "c3d74d01-ff64-4a23-a691-c319c2a6a4ba", 00:14:14.994 "assigned_rate_limits": { 00:14:14.994 "rw_ios_per_sec": 0, 00:14:14.994 "rw_mbytes_per_sec": 0, 00:14:14.994 "r_mbytes_per_sec": 0, 00:14:14.994 "w_mbytes_per_sec": 0 00:14:14.994 }, 00:14:14.994 "claimed": true, 00:14:14.994 "claim_type": "exclusive_write", 00:14:14.994 "zoned": false, 00:14:14.994 "supported_io_types": { 00:14:14.994 "read": true, 00:14:14.994 "write": true, 00:14:14.994 "unmap": true, 00:14:14.994 "write_zeroes": true, 00:14:14.994 "flush": true, 00:14:14.994 "reset": true, 00:14:14.994 "compare": false, 00:14:14.994 "compare_and_write": false, 00:14:14.994 "abort": true, 00:14:14.994 "nvme_admin": false, 00:14:14.994 "nvme_io": false 00:14:14.994 }, 00:14:14.994 "memory_domains": [ 00:14:14.994 { 00:14:14.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.994 "dma_device_type": 2 00:14:14.994 } 00:14:14.994 ], 00:14:14.994 "driver_specific": {} 00:14:14.994 } 00:14:14.994 ] 00:14:14.994 12:59:33 
-- common/autotest_common.sh@895 -- # return 0 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.994 12:59:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.252 12:59:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:15.252 "name": "Existed_Raid", 00:14:15.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.252 "strip_size_kb": 64, 00:14:15.252 "state": "configuring", 00:14:15.252 "raid_level": "raid0", 00:14:15.252 "superblock": false, 00:14:15.252 "num_base_bdevs": 2, 00:14:15.252 "num_base_bdevs_discovered": 1, 00:14:15.252 "num_base_bdevs_operational": 2, 00:14:15.252 "base_bdevs_list": [ 00:14:15.252 { 00:14:15.252 "name": "BaseBdev1", 00:14:15.252 "uuid": "c3d74d01-ff64-4a23-a691-c319c2a6a4ba", 00:14:15.252 "is_configured": true, 00:14:15.252 "data_offset": 0, 00:14:15.252 "data_size": 65536 00:14:15.252 }, 00:14:15.252 { 00:14:15.252 "name": "BaseBdev2", 00:14:15.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.252 "is_configured": false, 00:14:15.252 "data_offset": 0, 00:14:15.252 "data_size": 0 00:14:15.252 } 00:14:15.252 ] 00:14:15.252 }' 00:14:15.252 12:59:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:15.252 12:59:34 -- common/autotest_common.sh@10 -- # set +x 00:14:15.845 12:59:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:16.103 [2024-06-11 12:59:34.860288] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.103 [2024-06-11 12:59:34.860333] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:16.103 12:59:34 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:16.103 12:59:34 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:16.361 [2024-06-11 12:59:35.044381] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.361 [2024-06-11 12:59:35.046566] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.361 [2024-06-11 12:59:35.046636] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:16.361 12:59:35 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.361 12:59:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.619 12:59:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:16.619 "name": "Existed_Raid", 00:14:16.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.619 "strip_size_kb": 64, 00:14:16.619 "state": "configuring", 00:14:16.619 "raid_level": "raid0", 00:14:16.619 "superblock": false, 00:14:16.619 "num_base_bdevs": 2, 00:14:16.619 "num_base_bdevs_discovered": 1, 00:14:16.619 "num_base_bdevs_operational": 2, 00:14:16.619 "base_bdevs_list": [ 00:14:16.619 { 00:14:16.619 "name": "BaseBdev1", 00:14:16.619 "uuid": "c3d74d01-ff64-4a23-a691-c319c2a6a4ba", 00:14:16.619 "is_configured": true, 00:14:16.619 "data_offset": 0, 00:14:16.619 "data_size": 65536 00:14:16.619 }, 00:14:16.619 { 00:14:16.619 "name": "BaseBdev2", 00:14:16.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.619 "is_configured": false, 00:14:16.619 "data_offset": 0, 00:14:16.619 "data_size": 0 00:14:16.619 } 00:14:16.619 ] 00:14:16.619 }' 00:14:16.619 12:59:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:16.619 12:59:35 -- common/autotest_common.sh@10 -- # set +x 00:14:17.185 12:59:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:17.443 [2024-06-11 12:59:36.205246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.443 [2024-06-11 12:59:36.205310] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:17.443 [2024-06-11 12:59:36.205333] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:17.443 [2024-06-11 12:59:36.205503] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:17.443 [2024-06-11 12:59:36.205897] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:17.443 [2024-06-11 12:59:36.205931] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:14:17.443 [2024-06-11 12:59:36.206255] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.443 BaseBdev2 00:14:17.443 12:59:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:17.443 12:59:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:17.443 12:59:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:17.443 12:59:36 -- common/autotest_common.sh@889 -- # local i 00:14:17.443 12:59:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:17.443 12:59:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:17.443 
12:59:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:17.702 12:59:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:17.960 [ 00:14:17.960 { 00:14:17.960 "name": "BaseBdev2", 00:14:17.960 "aliases": [ 00:14:17.960 "d548a0f7-ddce-4afa-bde0-1882e7807ca3" 00:14:17.960 ], 00:14:17.960 "product_name": "Malloc disk", 00:14:17.960 "block_size": 512, 00:14:17.960 "num_blocks": 65536, 00:14:17.960 "uuid": "d548a0f7-ddce-4afa-bde0-1882e7807ca3", 00:14:17.960 "assigned_rate_limits": { 00:14:17.960 "rw_ios_per_sec": 0, 00:14:17.960 "rw_mbytes_per_sec": 0, 00:14:17.960 "r_mbytes_per_sec": 0, 00:14:17.960 "w_mbytes_per_sec": 0 00:14:17.960 }, 00:14:17.960 "claimed": true, 00:14:17.960 "claim_type": "exclusive_write", 00:14:17.960 "zoned": false, 00:14:17.960 "supported_io_types": { 00:14:17.960 "read": true, 00:14:17.960 "write": true, 00:14:17.960 "unmap": true, 00:14:17.960 "write_zeroes": true, 00:14:17.960 "flush": true, 00:14:17.960 "reset": true, 00:14:17.960 "compare": false, 00:14:17.960 "compare_and_write": false, 00:14:17.960 "abort": true, 00:14:17.960 "nvme_admin": false, 00:14:17.960 "nvme_io": false 00:14:17.960 }, 00:14:17.960 "memory_domains": [ 00:14:17.960 { 00:14:17.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.960 "dma_device_type": 2 00:14:17.960 } 00:14:17.960 ], 00:14:17.960 "driver_specific": {} 00:14:17.960 } 00:14:17.960 ] 00:14:17.960 12:59:36 -- common/autotest_common.sh@895 -- # return 0 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.960 12:59:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.219 12:59:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:18.219 "name": "Existed_Raid", 00:14:18.219 "uuid": "413c6c81-e7bd-4dba-807f-457037145f3c", 00:14:18.219 "strip_size_kb": 64, 00:14:18.219 "state": "online", 00:14:18.219 "raid_level": "raid0", 00:14:18.219 "superblock": false, 00:14:18.219 "num_base_bdevs": 2, 00:14:18.219 "num_base_bdevs_discovered": 2, 00:14:18.219 "num_base_bdevs_operational": 2, 00:14:18.219 "base_bdevs_list": [ 00:14:18.219 { 00:14:18.219 "name": "BaseBdev1", 00:14:18.219 "uuid": "c3d74d01-ff64-4a23-a691-c319c2a6a4ba", 00:14:18.219 "is_configured": true, 00:14:18.219 "data_offset": 0, 00:14:18.219 "data_size": 65536 00:14:18.219 }, 00:14:18.219 { 00:14:18.219 "name": "BaseBdev2", 
00:14:18.219 "uuid": "d548a0f7-ddce-4afa-bde0-1882e7807ca3", 00:14:18.219 "is_configured": true, 00:14:18.219 "data_offset": 0, 00:14:18.219 "data_size": 65536 00:14:18.219 } 00:14:18.219 ] 00:14:18.219 }' 00:14:18.219 12:59:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:18.219 12:59:36 -- common/autotest_common.sh@10 -- # set +x 00:14:18.786 12:59:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:19.044 [2024-06-11 12:59:37.817673] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.045 [2024-06-11 12:59:37.817710] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.045 [2024-06-11 12:59:37.817789] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.303 12:59:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.562 12:59:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.562 "name": "Existed_Raid", 00:14:19.562 "uuid": "413c6c81-e7bd-4dba-807f-457037145f3c", 00:14:19.562 "strip_size_kb": 64, 00:14:19.562 "state": "offline", 00:14:19.562 "raid_level": "raid0", 00:14:19.562 "superblock": false, 00:14:19.562 "num_base_bdevs": 2, 00:14:19.562 "num_base_bdevs_discovered": 1, 00:14:19.562 "num_base_bdevs_operational": 1, 00:14:19.562 "base_bdevs_list": [ 00:14:19.562 { 00:14:19.562 "name": null, 00:14:19.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.562 "is_configured": false, 00:14:19.562 "data_offset": 0, 00:14:19.562 "data_size": 65536 00:14:19.562 }, 00:14:19.562 { 00:14:19.562 "name": "BaseBdev2", 00:14:19.562 "uuid": "d548a0f7-ddce-4afa-bde0-1882e7807ca3", 00:14:19.562 "is_configured": true, 00:14:19.562 "data_offset": 0, 00:14:19.562 "data_size": 65536 00:14:19.562 } 00:14:19.562 ] 00:14:19.562 }' 00:14:19.562 12:59:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.562 12:59:38 -- common/autotest_common.sh@10 -- # set +x 00:14:20.129 12:59:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:20.129 12:59:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:20.129 12:59:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.129 12:59:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:20.388 12:59:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:20.388 12:59:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:20.388 12:59:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:20.647 [2024-06-11 12:59:39.253242] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.647 [2024-06-11 12:59:39.253312] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:14:20.647 12:59:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:20.647 12:59:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:20.647 12:59:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.647 12:59:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:20.904 12:59:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:20.905 12:59:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:20.905 12:59:39 -- bdev/bdev_raid.sh@287 -- # killprocess 114753 00:14:20.905 12:59:39 -- common/autotest_common.sh@926 -- # '[' -z 114753 ']' 00:14:20.905 12:59:39 -- common/autotest_common.sh@930 -- # kill -0 114753 00:14:20.905 12:59:39 -- common/autotest_common.sh@931 -- # uname 00:14:20.905 12:59:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:20.905 12:59:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114753 00:14:20.905 killing process with pid 114753 00:14:20.905 12:59:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:20.905 12:59:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:20.905 12:59:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114753' 00:14:20.905 12:59:39 -- common/autotest_common.sh@945 -- # kill 114753 00:14:20.905 12:59:39 -- common/autotest_common.sh@950 -- # wait 114753 00:14:20.905 [2024-06-11 12:59:39.595377] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:20.905 [2024-06-11 12:59:39.595511] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.839 ************************************ 00:14:21.839 END TEST raid_state_function_test 00:14:21.839 ************************************ 00:14:21.839 12:59:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:21.839 00:14:21.839 real 0m9.988s 00:14:21.839 user 0m17.488s 00:14:21.839 sys 0m1.159s 00:14:21.839 12:59:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.839 12:59:40 -- common/autotest_common.sh@10 -- # set +x 00:14:21.839 12:59:40 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:21.839 12:59:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:21.839 12:59:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:21.839 12:59:40 -- common/autotest_common.sh@10 -- # set +x 00:14:22.098 ************************************ 00:14:22.098 START TEST raid_state_function_test_sb 00:14:22.098 ************************************ 00:14:22.098 12:59:40 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:22.098 12:59:40 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=115096 00:14:22.098 Process raid pid: 115096 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115096' 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115096 /var/tmp/spdk-raid.sock 00:14:22.098 12:59:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:22.098 12:59:40 -- common/autotest_common.sh@819 -- # '[' -z 115096 ']' 00:14:22.098 12:59:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:22.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:22.098 12:59:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:22.099 12:59:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:22.099 12:59:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:22.099 12:59:40 -- common/autotest_common.sh@10 -- # set +x 00:14:22.099 [2024-06-11 12:59:40.746149] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:14:22.099 [2024-06-11 12:59:40.746337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.099 [2024-06-11 12:59:40.913089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.358 [2024-06-11 12:59:41.133946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.617 [2024-06-11 12:59:41.316757] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.876 12:59:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:22.876 12:59:41 -- common/autotest_common.sh@852 -- # return 0 00:14:22.876 12:59:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:23.135 [2024-06-11 12:59:41.809014] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.135 [2024-06-11 12:59:41.809112] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.135 [2024-06-11 12:59:41.809141] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.135 [2024-06-11 12:59:41.809160] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.135 12:59:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.393 12:59:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:23.393 "name": "Existed_Raid", 00:14:23.393 "uuid": "c8e83653-0a6c-4c3a-85b1-d7be47d23606", 00:14:23.393 "strip_size_kb": 64, 00:14:23.393 "state": "configuring", 00:14:23.393 "raid_level": "raid0", 00:14:23.393 "superblock": true, 00:14:23.393 "num_base_bdevs": 2, 00:14:23.393 "num_base_bdevs_discovered": 0, 00:14:23.393 "num_base_bdevs_operational": 2, 00:14:23.393 "base_bdevs_list": [ 00:14:23.393 { 00:14:23.393 "name": "BaseBdev1", 00:14:23.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.393 "is_configured": false, 00:14:23.393 "data_offset": 0, 00:14:23.393 "data_size": 0 00:14:23.393 }, 00:14:23.393 { 00:14:23.393 "name": "BaseBdev2", 00:14:23.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.393 "is_configured": false, 00:14:23.393 "data_offset": 0, 00:14:23.393 "data_size": 0 00:14:23.393 } 00:14:23.393 ] 00:14:23.393 }' 00:14:23.393 12:59:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:23.393 12:59:42 -- 
common/autotest_common.sh@10 -- # set +x 00:14:23.960 12:59:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:24.218 [2024-06-11 12:59:42.949098] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:24.218 [2024-06-11 12:59:42.949152] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:24.218 12:59:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:24.476 [2024-06-11 12:59:43.149207] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:24.476 [2024-06-11 12:59:43.149308] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:24.476 [2024-06-11 12:59:43.149337] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.476 [2024-06-11 12:59:43.149376] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.476 12:59:43 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:24.734 [2024-06-11 12:59:43.390028] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.734 BaseBdev1 00:14:24.734 12:59:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:24.734 12:59:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:24.734 12:59:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:24.734 12:59:43 -- common/autotest_common.sh@889 -- # local i 00:14:24.734 12:59:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:24.734 12:59:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:24.734 12:59:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:24.993 12:59:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:24.993 [ 00:14:24.993 { 00:14:24.993 "name": "BaseBdev1", 00:14:24.993 "aliases": [ 00:14:24.993 "96910e66-943f-4057-958f-83dc1cf9cf4a" 00:14:24.993 ], 00:14:24.993 "product_name": "Malloc disk", 00:14:24.993 "block_size": 512, 00:14:24.993 "num_blocks": 65536, 00:14:24.993 "uuid": "96910e66-943f-4057-958f-83dc1cf9cf4a", 00:14:24.993 "assigned_rate_limits": { 00:14:24.993 "rw_ios_per_sec": 0, 00:14:24.993 "rw_mbytes_per_sec": 0, 00:14:24.993 "r_mbytes_per_sec": 0, 00:14:24.993 "w_mbytes_per_sec": 0 00:14:24.993 }, 00:14:24.993 "claimed": true, 00:14:24.993 "claim_type": "exclusive_write", 00:14:24.993 "zoned": false, 00:14:24.993 "supported_io_types": { 00:14:24.993 "read": true, 00:14:24.993 "write": true, 00:14:24.993 "unmap": true, 00:14:24.993 "write_zeroes": true, 00:14:24.993 "flush": true, 00:14:24.993 "reset": true, 00:14:24.993 "compare": false, 00:14:24.993 "compare_and_write": false, 00:14:24.993 "abort": true, 00:14:24.993 "nvme_admin": false, 00:14:24.993 "nvme_io": false 00:14:24.993 }, 00:14:24.993 "memory_domains": [ 00:14:24.993 { 00:14:24.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.993 "dma_device_type": 2 00:14:24.993 } 00:14:24.993 ], 00:14:24.993 "driver_specific": {} 00:14:24.993 } 00:14:24.993 ] 00:14:24.993 
12:59:43 -- common/autotest_common.sh@895 -- # return 0 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.993 12:59:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.251 12:59:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:25.251 "name": "Existed_Raid", 00:14:25.251 "uuid": "7b6b007c-dead-4735-813c-112088fed2bc", 00:14:25.251 "strip_size_kb": 64, 00:14:25.251 "state": "configuring", 00:14:25.251 "raid_level": "raid0", 00:14:25.251 "superblock": true, 00:14:25.251 "num_base_bdevs": 2, 00:14:25.251 "num_base_bdevs_discovered": 1, 00:14:25.251 "num_base_bdevs_operational": 2, 00:14:25.251 "base_bdevs_list": [ 00:14:25.251 { 00:14:25.251 "name": "BaseBdev1", 00:14:25.251 "uuid": "96910e66-943f-4057-958f-83dc1cf9cf4a", 00:14:25.251 "is_configured": true, 00:14:25.251 "data_offset": 2048, 00:14:25.251 "data_size": 63488 00:14:25.251 }, 00:14:25.251 { 00:14:25.251 "name": "BaseBdev2", 00:14:25.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.251 "is_configured": false, 00:14:25.251 "data_offset": 0, 00:14:25.251 "data_size": 0 00:14:25.252 } 00:14:25.252 ] 00:14:25.252 }' 00:14:25.252 12:59:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:25.252 12:59:44 -- common/autotest_common.sh@10 -- # set +x 00:14:26.187 12:59:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:26.187 [2024-06-11 12:59:44.882392] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.187 [2024-06-11 12:59:44.882443] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:26.187 12:59:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:26.187 12:59:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:26.459 12:59:45 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:26.733 BaseBdev1 00:14:26.733 12:59:45 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:26.733 12:59:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:26.733 12:59:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:26.733 12:59:45 -- common/autotest_common.sh@889 -- # local i 00:14:26.733 12:59:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:26.733 12:59:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:26.733 12:59:45 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:26.992 12:59:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.992 [ 00:14:26.992 { 00:14:26.992 "name": "BaseBdev1", 00:14:26.992 "aliases": [ 00:14:26.992 "0dfee36f-d215-4d7d-9424-09173f6b453b" 00:14:26.992 ], 00:14:26.992 "product_name": "Malloc disk", 00:14:26.992 "block_size": 512, 00:14:26.992 "num_blocks": 65536, 00:14:26.992 "uuid": "0dfee36f-d215-4d7d-9424-09173f6b453b", 00:14:26.992 "assigned_rate_limits": { 00:14:26.992 "rw_ios_per_sec": 0, 00:14:26.992 "rw_mbytes_per_sec": 0, 00:14:26.992 "r_mbytes_per_sec": 0, 00:14:26.992 "w_mbytes_per_sec": 0 00:14:26.992 }, 00:14:26.992 "claimed": false, 00:14:26.992 "zoned": false, 00:14:26.992 "supported_io_types": { 00:14:26.992 "read": true, 00:14:26.992 "write": true, 00:14:26.992 "unmap": true, 00:14:26.992 "write_zeroes": true, 00:14:26.992 "flush": true, 00:14:26.992 "reset": true, 00:14:26.992 "compare": false, 00:14:26.992 "compare_and_write": false, 00:14:26.992 "abort": true, 00:14:26.992 "nvme_admin": false, 00:14:26.992 "nvme_io": false 00:14:26.992 }, 00:14:26.992 "memory_domains": [ 00:14:26.992 { 00:14:26.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.992 "dma_device_type": 2 00:14:26.992 } 00:14:26.992 ], 00:14:26.992 "driver_specific": {} 00:14:26.992 } 00:14:26.992 ] 00:14:27.251 12:59:45 -- common/autotest_common.sh@895 -- # return 0 00:14:27.251 12:59:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:27.251 [2024-06-11 12:59:46.018050] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.251 [2024-06-11 12:59:46.019727] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.251 [2024-06-11 12:59:46.019796] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.251 12:59:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.510 12:59:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:27.510 "name": "Existed_Raid", 00:14:27.510 "uuid": "aba6c571-e885-4f85-a012-4a2e28bd67ba", 00:14:27.510 "strip_size_kb": 64, 00:14:27.510 "state": 
"configuring", 00:14:27.510 "raid_level": "raid0", 00:14:27.510 "superblock": true, 00:14:27.510 "num_base_bdevs": 2, 00:14:27.510 "num_base_bdevs_discovered": 1, 00:14:27.510 "num_base_bdevs_operational": 2, 00:14:27.510 "base_bdevs_list": [ 00:14:27.510 { 00:14:27.510 "name": "BaseBdev1", 00:14:27.510 "uuid": "0dfee36f-d215-4d7d-9424-09173f6b453b", 00:14:27.510 "is_configured": true, 00:14:27.510 "data_offset": 2048, 00:14:27.510 "data_size": 63488 00:14:27.510 }, 00:14:27.510 { 00:14:27.510 "name": "BaseBdev2", 00:14:27.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.510 "is_configured": false, 00:14:27.510 "data_offset": 0, 00:14:27.510 "data_size": 0 00:14:27.510 } 00:14:27.510 ] 00:14:27.510 }' 00:14:27.510 12:59:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:27.510 12:59:46 -- common/autotest_common.sh@10 -- # set +x 00:14:28.446 12:59:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:28.446 [2024-06-11 12:59:47.217938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.446 [2024-06-11 12:59:47.218191] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:14:28.447 [2024-06-11 12:59:47.218207] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:28.447 [2024-06-11 12:59:47.218378] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:28.447 BaseBdev2 00:14:28.447 [2024-06-11 12:59:47.218738] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:14:28.447 [2024-06-11 12:59:47.218763] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:14:28.447 [2024-06-11 12:59:47.218908] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.447 12:59:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:28.447 12:59:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:28.447 12:59:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:28.447 12:59:47 -- common/autotest_common.sh@889 -- # local i 00:14:28.447 12:59:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:28.447 12:59:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:28.447 12:59:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:28.705 12:59:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:28.964 [ 00:14:28.964 { 00:14:28.964 "name": "BaseBdev2", 00:14:28.964 "aliases": [ 00:14:28.964 "74349484-b6f9-43c7-9f60-0947d7bfb6c1" 00:14:28.964 ], 00:14:28.964 "product_name": "Malloc disk", 00:14:28.964 "block_size": 512, 00:14:28.964 "num_blocks": 65536, 00:14:28.964 "uuid": "74349484-b6f9-43c7-9f60-0947d7bfb6c1", 00:14:28.964 "assigned_rate_limits": { 00:14:28.964 "rw_ios_per_sec": 0, 00:14:28.964 "rw_mbytes_per_sec": 0, 00:14:28.964 "r_mbytes_per_sec": 0, 00:14:28.964 "w_mbytes_per_sec": 0 00:14:28.964 }, 00:14:28.964 "claimed": true, 00:14:28.964 "claim_type": "exclusive_write", 00:14:28.964 "zoned": false, 00:14:28.964 "supported_io_types": { 00:14:28.964 "read": true, 00:14:28.964 "write": true, 00:14:28.964 "unmap": true, 00:14:28.964 "write_zeroes": true, 00:14:28.964 "flush": true, 00:14:28.964 
"reset": true, 00:14:28.964 "compare": false, 00:14:28.964 "compare_and_write": false, 00:14:28.964 "abort": true, 00:14:28.964 "nvme_admin": false, 00:14:28.964 "nvme_io": false 00:14:28.964 }, 00:14:28.964 "memory_domains": [ 00:14:28.964 { 00:14:28.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.964 "dma_device_type": 2 00:14:28.964 } 00:14:28.964 ], 00:14:28.964 "driver_specific": {} 00:14:28.964 } 00:14:28.964 ] 00:14:28.964 12:59:47 -- common/autotest_common.sh@895 -- # return 0 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.964 12:59:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.223 12:59:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:29.223 "name": "Existed_Raid", 00:14:29.223 "uuid": "aba6c571-e885-4f85-a012-4a2e28bd67ba", 00:14:29.223 "strip_size_kb": 64, 00:14:29.223 "state": "online", 00:14:29.223 "raid_level": "raid0", 00:14:29.223 "superblock": true, 00:14:29.223 "num_base_bdevs": 2, 00:14:29.223 "num_base_bdevs_discovered": 2, 00:14:29.223 "num_base_bdevs_operational": 2, 00:14:29.223 "base_bdevs_list": [ 00:14:29.223 { 00:14:29.223 "name": "BaseBdev1", 00:14:29.223 "uuid": "0dfee36f-d215-4d7d-9424-09173f6b453b", 00:14:29.223 "is_configured": true, 00:14:29.223 "data_offset": 2048, 00:14:29.223 "data_size": 63488 00:14:29.223 }, 00:14:29.223 { 00:14:29.223 "name": "BaseBdev2", 00:14:29.223 "uuid": "74349484-b6f9-43c7-9f60-0947d7bfb6c1", 00:14:29.223 "is_configured": true, 00:14:29.223 "data_offset": 2048, 00:14:29.223 "data_size": 63488 00:14:29.223 } 00:14:29.223 ] 00:14:29.223 }' 00:14:29.223 12:59:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:29.223 12:59:47 -- common/autotest_common.sh@10 -- # set +x 00:14:29.790 12:59:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:30.049 [2024-06-11 12:59:48.790379] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:30.049 [2024-06-11 12:59:48.790412] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.049 [2024-06-11 12:59:48.790466] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:30.049 
12:59:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.049 12:59:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.308 12:59:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.308 "name": "Existed_Raid", 00:14:30.308 "uuid": "aba6c571-e885-4f85-a012-4a2e28bd67ba", 00:14:30.308 "strip_size_kb": 64, 00:14:30.308 "state": "offline", 00:14:30.308 "raid_level": "raid0", 00:14:30.308 "superblock": true, 00:14:30.308 "num_base_bdevs": 2, 00:14:30.308 "num_base_bdevs_discovered": 1, 00:14:30.308 "num_base_bdevs_operational": 1, 00:14:30.308 "base_bdevs_list": [ 00:14:30.308 { 00:14:30.308 "name": null, 00:14:30.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.308 "is_configured": false, 00:14:30.308 "data_offset": 2048, 00:14:30.308 "data_size": 63488 00:14:30.308 }, 00:14:30.308 { 00:14:30.308 "name": "BaseBdev2", 00:14:30.308 "uuid": "74349484-b6f9-43c7-9f60-0947d7bfb6c1", 00:14:30.308 "is_configured": true, 00:14:30.308 "data_offset": 2048, 00:14:30.308 "data_size": 63488 00:14:30.308 } 00:14:30.308 ] 00:14:30.308 }' 00:14:30.308 12:59:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.308 12:59:49 -- common/autotest_common.sh@10 -- # set +x 00:14:31.244 12:59:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:31.244 12:59:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:31.244 12:59:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.244 12:59:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:31.244 12:59:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:31.244 12:59:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:31.244 12:59:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:31.503 [2024-06-11 12:59:50.191376] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:31.503 [2024-06-11 12:59:50.191456] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:14:31.503 12:59:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:31.503 12:59:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:31.503 12:59:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.503 12:59:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:31.762 12:59:50 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:31.762 12:59:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:31.762 12:59:50 -- bdev/bdev_raid.sh@287 -- # killprocess 115096 00:14:31.762 12:59:50 -- common/autotest_common.sh@926 -- # '[' -z 115096 ']' 00:14:31.762 12:59:50 -- common/autotest_common.sh@930 -- # kill -0 115096 00:14:31.762 12:59:50 -- common/autotest_common.sh@931 -- # uname 00:14:31.762 12:59:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:31.762 12:59:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115096 00:14:31.762 12:59:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:31.762 killing process with pid 115096 00:14:31.762 12:59:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:31.762 12:59:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115096' 00:14:31.762 12:59:50 -- common/autotest_common.sh@945 -- # kill 115096 00:14:31.762 12:59:50 -- common/autotest_common.sh@950 -- # wait 115096 00:14:31.762 [2024-06-11 12:59:50.548999] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.762 [2024-06-11 12:59:50.549107] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.700 ************************************ 00:14:32.700 END TEST raid_state_function_test_sb 00:14:32.700 ************************************ 00:14:32.700 12:59:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:32.700 00:14:32.700 real 0m10.811s 00:14:32.700 user 0m19.083s 00:14:32.700 sys 0m1.195s 00:14:32.700 12:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.700 12:59:51 -- common/autotest_common.sh@10 -- # set +x 00:14:32.700 12:59:51 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:32.700 12:59:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:32.700 12:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:32.700 12:59:51 -- common/autotest_common.sh@10 -- # set +x 00:14:32.959 ************************************ 00:14:32.959 START TEST raid_superblock_test 00:14:32.959 ************************************ 00:14:32.959 12:59:51 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@357 -- # raid_pid=115448 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@358 -- # waitforlisten 115448 
/var/tmp/spdk-raid.sock 00:14:32.959 12:59:51 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:32.959 12:59:51 -- common/autotest_common.sh@819 -- # '[' -z 115448 ']' 00:14:32.959 12:59:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:32.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:32.959 12:59:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:32.959 12:59:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:32.959 12:59:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:32.959 12:59:51 -- common/autotest_common.sh@10 -- # set +x 00:14:32.959 [2024-06-11 12:59:51.603255] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:32.959 [2024-06-11 12:59:51.603446] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115448 ] 00:14:32.959 [2024-06-11 12:59:51.768754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.218 [2024-06-11 12:59:51.934464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.478 [2024-06-11 12:59:52.110060] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.738 12:59:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:33.738 12:59:52 -- common/autotest_common.sh@852 -- # return 0 00:14:33.738 12:59:52 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:33.738 12:59:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:33.738 12:59:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:33.738 12:59:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:33.738 12:59:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:33.738 12:59:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:33.738 12:59:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:33.738 12:59:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:33.738 12:59:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:33.996 malloc1 00:14:33.996 12:59:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:34.256 [2024-06-11 12:59:52.919827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:34.256 [2024-06-11 12:59:52.919947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.256 [2024-06-11 12:59:52.919979] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:34.256 [2024-06-11 12:59:52.920088] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.256 [2024-06-11 12:59:52.922092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.256 [2024-06-11 12:59:52.922138] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:34.256 pt1 00:14:34.256 12:59:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
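For reference, the per-leg setup traced above for malloc1/pt1 (and repeated for malloc2/pt2 just below) comes down to two RPCs against the test socket. A minimal manual reproduction, assuming the same socket path, sizes and UUID that appear in the trace, would be roughly:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001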
00:14:34.256 12:59:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:34.256 12:59:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:34.256 12:59:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:34.256 12:59:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:34.256 12:59:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:34.256 12:59:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:34.256 12:59:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:34.256 12:59:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:34.515 malloc2 00:14:34.515 12:59:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:34.515 [2024-06-11 12:59:53.338971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:34.515 [2024-06-11 12:59:53.339082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.515 [2024-06-11 12:59:53.339123] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:34.515 [2024-06-11 12:59:53.339222] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.515 [2024-06-11 12:59:53.341269] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.515 [2024-06-11 12:59:53.341316] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:34.515 pt2 00:14:34.515 12:59:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:34.515 12:59:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:34.515 12:59:53 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:34.773 [2024-06-11 12:59:53.519096] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:34.773 [2024-06-11 12:59:53.520945] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:34.773 [2024-06-11 12:59:53.521254] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:14:34.773 [2024-06-11 12:59:53.521279] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:34.773 [2024-06-11 12:59:53.521410] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:34.773 [2024-06-11 12:59:53.521813] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:14:34.773 [2024-06-11 12:59:53.521870] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:14:34.773 [2024-06-11 12:59:53.522073] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
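The raid creation and the state check that follow reduce to roughly this pair of RPCs (a sketch reusing the strip size, names and socket from the trace; verify_raid_bdev_state also tracks raid_level, strip size and base bdev counts, which are omitted here):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

Here "online" is the expected output once the array is assembled, as the JSON dumped below confirms.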
00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.773 12:59:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.032 12:59:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:35.032 "name": "raid_bdev1", 00:14:35.032 "uuid": "ab0d9bb3-a2e1-42dc-82b5-dfcdada5950b", 00:14:35.032 "strip_size_kb": 64, 00:14:35.032 "state": "online", 00:14:35.032 "raid_level": "raid0", 00:14:35.032 "superblock": true, 00:14:35.032 "num_base_bdevs": 2, 00:14:35.032 "num_base_bdevs_discovered": 2, 00:14:35.032 "num_base_bdevs_operational": 2, 00:14:35.032 "base_bdevs_list": [ 00:14:35.032 { 00:14:35.032 "name": "pt1", 00:14:35.032 "uuid": "5aea292f-94af-550a-90f2-46e8284df79f", 00:14:35.032 "is_configured": true, 00:14:35.032 "data_offset": 2048, 00:14:35.032 "data_size": 63488 00:14:35.032 }, 00:14:35.032 { 00:14:35.032 "name": "pt2", 00:14:35.032 "uuid": "4f358d3c-b63d-5a6a-a68d-81dee09d6701", 00:14:35.032 "is_configured": true, 00:14:35.032 "data_offset": 2048, 00:14:35.032 "data_size": 63488 00:14:35.032 } 00:14:35.032 ] 00:14:35.032 }' 00:14:35.032 12:59:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:35.032 12:59:53 -- common/autotest_common.sh@10 -- # set +x 00:14:35.600 12:59:54 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:35.600 12:59:54 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:35.859 [2024-06-11 12:59:54.595447] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.859 12:59:54 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=ab0d9bb3-a2e1-42dc-82b5-dfcdada5950b 00:14:35.859 12:59:54 -- bdev/bdev_raid.sh@380 -- # '[' -z ab0d9bb3-a2e1-42dc-82b5-dfcdada5950b ']' 00:14:35.859 12:59:54 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:36.117 [2024-06-11 12:59:54.787266] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.117 [2024-06-11 12:59:54.787293] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.117 [2024-06-11 12:59:54.787406] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.117 [2024-06-11 12:59:54.787495] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.117 [2024-06-11 12:59:54.787516] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:14:36.117 12:59:54 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.117 12:59:54 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:36.376 12:59:54 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:36.376 12:59:54 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:36.376 12:59:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:36.376 12:59:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
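The teardown the test performs next mirrors the setup in reverse; assuming the same bdev names, it can be repeated by hand with:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2

The empty raid_bdev= assignment in the trace confirms that the bdev_raid_get_bdevs query finds no raid bdev once the array has been deleted.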
00:14:36.376 12:59:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:36.376 12:59:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:36.634 12:59:55 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:36.634 12:59:55 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:36.893 12:59:55 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:36.893 12:59:55 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:36.893 12:59:55 -- common/autotest_common.sh@640 -- # local es=0 00:14:36.893 12:59:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:36.893 12:59:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.893 12:59:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.893 12:59:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.893 12:59:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.893 12:59:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.893 12:59:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.893 12:59:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.893 12:59:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:36.893 12:59:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:36.893 [2024-06-11 12:59:55.719418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:36.893 [2024-06-11 12:59:55.721082] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:36.893 [2024-06-11 12:59:55.721148] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:36.893 [2024-06-11 12:59:55.721229] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:36.893 [2024-06-11 12:59:55.721262] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.893 [2024-06-11 12:59:55.721273] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:14:36.893 request: 00:14:36.893 { 00:14:36.893 "name": "raid_bdev1", 00:14:36.893 "raid_level": "raid0", 00:14:36.893 "base_bdevs": [ 00:14:36.893 "malloc1", 00:14:36.893 "malloc2" 00:14:36.893 ], 00:14:36.893 "superblock": false, 00:14:36.893 "strip_size_kb": 64, 00:14:36.893 "method": "bdev_raid_create", 00:14:36.893 "req_id": 1 00:14:36.893 } 00:14:36.893 Got JSON-RPC error response 00:14:36.893 response: 00:14:36.893 { 00:14:36.893 "code": -17, 00:14:36.893 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:36.893 } 00:14:37.152 12:59:55 -- common/autotest_common.sh@643 -- # es=1 00:14:37.152 12:59:55 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:14:37.152 12:59:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:37.152 12:59:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:37.152 12:59:55 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.152 12:59:55 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:37.152 12:59:55 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:37.152 12:59:55 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:37.152 12:59:55 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:37.411 [2024-06-11 12:59:56.095456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:37.411 [2024-06-11 12:59:56.095567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.411 [2024-06-11 12:59:56.095602] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:37.411 [2024-06-11 12:59:56.095633] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.411 [2024-06-11 12:59:56.097730] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.411 [2024-06-11 12:59:56.097795] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:37.411 [2024-06-11 12:59:56.097916] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:37.411 [2024-06-11 12:59:56.098025] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:37.411 pt1 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.411 12:59:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.670 12:59:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.670 "name": "raid_bdev1", 00:14:37.670 "uuid": "ab0d9bb3-a2e1-42dc-82b5-dfcdada5950b", 00:14:37.670 "strip_size_kb": 64, 00:14:37.670 "state": "configuring", 00:14:37.670 "raid_level": "raid0", 00:14:37.670 "superblock": true, 00:14:37.670 "num_base_bdevs": 2, 00:14:37.670 "num_base_bdevs_discovered": 1, 00:14:37.670 "num_base_bdevs_operational": 2, 00:14:37.670 "base_bdevs_list": [ 00:14:37.670 { 00:14:37.670 "name": "pt1", 00:14:37.670 "uuid": "5aea292f-94af-550a-90f2-46e8284df79f", 00:14:37.670 "is_configured": true, 00:14:37.670 "data_offset": 2048, 00:14:37.670 "data_size": 63488 00:14:37.670 }, 00:14:37.670 { 00:14:37.670 "name": null, 00:14:37.670 "uuid": "4f358d3c-b63d-5a6a-a68d-81dee09d6701", 00:14:37.670 
"is_configured": false, 00:14:37.670 "data_offset": 2048, 00:14:37.670 "data_size": 63488 00:14:37.670 } 00:14:37.670 ] 00:14:37.670 }' 00:14:37.670 12:59:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.670 12:59:56 -- common/autotest_common.sh@10 -- # set +x 00:14:38.237 12:59:57 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:38.237 12:59:57 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:38.237 12:59:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:38.237 12:59:57 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:38.495 [2024-06-11 12:59:57.207772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:38.495 [2024-06-11 12:59:57.207892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.495 [2024-06-11 12:59:57.207931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:38.495 [2024-06-11 12:59:57.207958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.495 [2024-06-11 12:59:57.208516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.495 [2024-06-11 12:59:57.208579] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:38.495 [2024-06-11 12:59:57.208709] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:38.495 [2024-06-11 12:59:57.208735] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.495 [2024-06-11 12:59:57.208897] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:14:38.495 [2024-06-11 12:59:57.208911] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:38.495 [2024-06-11 12:59:57.209036] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:38.495 [2024-06-11 12:59:57.209360] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:14:38.495 [2024-06-11 12:59:57.209385] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:14:38.495 [2024-06-11 12:59:57.209563] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.495 pt2 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:38.495 12:59:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.495 12:59:57 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.754 12:59:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.754 "name": "raid_bdev1", 00:14:38.754 "uuid": "ab0d9bb3-a2e1-42dc-82b5-dfcdada5950b", 00:14:38.754 "strip_size_kb": 64, 00:14:38.754 "state": "online", 00:14:38.754 "raid_level": "raid0", 00:14:38.754 "superblock": true, 00:14:38.754 "num_base_bdevs": 2, 00:14:38.754 "num_base_bdevs_discovered": 2, 00:14:38.754 "num_base_bdevs_operational": 2, 00:14:38.754 "base_bdevs_list": [ 00:14:38.754 { 00:14:38.754 "name": "pt1", 00:14:38.754 "uuid": "5aea292f-94af-550a-90f2-46e8284df79f", 00:14:38.754 "is_configured": true, 00:14:38.754 "data_offset": 2048, 00:14:38.754 "data_size": 63488 00:14:38.754 }, 00:14:38.754 { 00:14:38.754 "name": "pt2", 00:14:38.754 "uuid": "4f358d3c-b63d-5a6a-a68d-81dee09d6701", 00:14:38.754 "is_configured": true, 00:14:38.754 "data_offset": 2048, 00:14:38.754 "data_size": 63488 00:14:38.754 } 00:14:38.754 ] 00:14:38.754 }' 00:14:38.754 12:59:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.754 12:59:57 -- common/autotest_common.sh@10 -- # set +x 00:14:39.320 12:59:58 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:39.320 12:59:58 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:39.578 [2024-06-11 12:59:58.312217] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.578 12:59:58 -- bdev/bdev_raid.sh@430 -- # '[' ab0d9bb3-a2e1-42dc-82b5-dfcdada5950b '!=' ab0d9bb3-a2e1-42dc-82b5-dfcdada5950b ']' 00:14:39.578 12:59:58 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:39.578 12:59:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:39.578 12:59:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:39.578 12:59:58 -- bdev/bdev_raid.sh@511 -- # killprocess 115448 00:14:39.578 12:59:58 -- common/autotest_common.sh@926 -- # '[' -z 115448 ']' 00:14:39.578 12:59:58 -- common/autotest_common.sh@930 -- # kill -0 115448 00:14:39.578 12:59:58 -- common/autotest_common.sh@931 -- # uname 00:14:39.578 12:59:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:39.578 12:59:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115448 00:14:39.579 killing process with pid 115448 00:14:39.579 12:59:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:39.579 12:59:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:39.579 12:59:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115448' 00:14:39.579 12:59:58 -- common/autotest_common.sh@945 -- # kill 115448 00:14:39.579 12:59:58 -- common/autotest_common.sh@950 -- # wait 115448 00:14:39.579 [2024-06-11 12:59:58.341486] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.579 [2024-06-11 12:59:58.341579] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.579 [2024-06-11 12:59:58.341661] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.579 [2024-06-11 12:59:58.341681] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:14:39.837 [2024-06-11 12:59:58.472968] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.772 ************************************ 00:14:40.773 END TEST raid_superblock_test 00:14:40.773 ************************************ 00:14:40.773 12:59:59 -- 
bdev/bdev_raid.sh@513 -- # return 0 00:14:40.773 00:14:40.773 real 0m7.870s 00:14:40.773 user 0m13.658s 00:14:40.773 sys 0m0.846s 00:14:40.773 12:59:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.773 12:59:59 -- common/autotest_common.sh@10 -- # set +x 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:40.773 12:59:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:40.773 12:59:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:40.773 12:59:59 -- common/autotest_common.sh@10 -- # set +x 00:14:40.773 ************************************ 00:14:40.773 START TEST raid_state_function_test 00:14:40.773 ************************************ 00:14:40.773 12:59:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=115705 00:14:40.773 Process raid pid: 115705 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115705' 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115705 /var/tmp/spdk-raid.sock 00:14:40.773 12:59:59 -- common/autotest_common.sh@819 -- # '[' -z 115705 ']' 00:14:40.773 12:59:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:40.773 12:59:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:40.773 12:59:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:40.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
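The concat variant of raid_state_function_test that starts here drives the same RPC socket; the key step visible further down is creating Existed_Raid while its base bdevs do not exist yet, which leaves the array in the "configuring" state. In sketch form, with the names taken from the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expected: configuring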
00:14:40.773 12:59:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:40.773 12:59:59 -- common/autotest_common.sh@10 -- # set +x 00:14:40.773 12:59:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:40.773 [2024-06-11 12:59:59.524880] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:40.773 [2024-06-11 12:59:59.525243] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.031 [2024-06-11 12:59:59.689558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.031 [2024-06-11 12:59:59.854553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.290 [2024-06-11 13:00:00.024900] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.858 13:00:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:41.858 13:00:00 -- common/autotest_common.sh@852 -- # return 0 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:41.858 [2024-06-11 13:00:00.629313] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.858 [2024-06-11 13:00:00.629416] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.858 [2024-06-11 13:00:00.629473] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.858 [2024-06-11 13:00:00.629494] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.858 13:00:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.117 13:00:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:42.117 "name": "Existed_Raid", 00:14:42.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.117 "strip_size_kb": 64, 00:14:42.117 "state": "configuring", 00:14:42.117 "raid_level": "concat", 00:14:42.117 "superblock": false, 00:14:42.117 "num_base_bdevs": 2, 00:14:42.117 "num_base_bdevs_discovered": 0, 00:14:42.117 "num_base_bdevs_operational": 2, 00:14:42.117 "base_bdevs_list": [ 00:14:42.117 { 00:14:42.117 "name": "BaseBdev1", 00:14:42.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.117 "is_configured": false, 
00:14:42.117 "data_offset": 0, 00:14:42.117 "data_size": 0 00:14:42.117 }, 00:14:42.117 { 00:14:42.117 "name": "BaseBdev2", 00:14:42.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.118 "is_configured": false, 00:14:42.118 "data_offset": 0, 00:14:42.118 "data_size": 0 00:14:42.118 } 00:14:42.118 ] 00:14:42.118 }' 00:14:42.118 13:00:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:42.118 13:00:00 -- common/autotest_common.sh@10 -- # set +x 00:14:42.685 13:00:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:42.943 [2024-06-11 13:00:01.685407] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.943 [2024-06-11 13:00:01.685476] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:42.943 13:00:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:43.202 [2024-06-11 13:00:01.865447] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.202 [2024-06-11 13:00:01.865530] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.202 [2024-06-11 13:00:01.865559] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.202 [2024-06-11 13:00:01.865580] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.202 13:00:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:43.459 [2024-06-11 13:00:02.138884] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.459 BaseBdev1 00:14:43.459 13:00:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:43.459 13:00:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:43.459 13:00:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:43.459 13:00:02 -- common/autotest_common.sh@889 -- # local i 00:14:43.459 13:00:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:43.459 13:00:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:43.459 13:00:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.716 13:00:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:43.973 [ 00:14:43.973 { 00:14:43.973 "name": "BaseBdev1", 00:14:43.973 "aliases": [ 00:14:43.973 "5ed981ef-d0ab-4237-b020-46cbea8bfe6c" 00:14:43.973 ], 00:14:43.973 "product_name": "Malloc disk", 00:14:43.973 "block_size": 512, 00:14:43.973 "num_blocks": 65536, 00:14:43.973 "uuid": "5ed981ef-d0ab-4237-b020-46cbea8bfe6c", 00:14:43.973 "assigned_rate_limits": { 00:14:43.973 "rw_ios_per_sec": 0, 00:14:43.973 "rw_mbytes_per_sec": 0, 00:14:43.973 "r_mbytes_per_sec": 0, 00:14:43.973 "w_mbytes_per_sec": 0 00:14:43.973 }, 00:14:43.973 "claimed": true, 00:14:43.973 "claim_type": "exclusive_write", 00:14:43.973 "zoned": false, 00:14:43.973 "supported_io_types": { 00:14:43.973 "read": true, 00:14:43.973 "write": true, 00:14:43.973 "unmap": true, 00:14:43.973 "write_zeroes": true, 00:14:43.973 "flush": true, 00:14:43.973 "reset": true, 00:14:43.973 
"compare": false, 00:14:43.973 "compare_and_write": false, 00:14:43.973 "abort": true, 00:14:43.973 "nvme_admin": false, 00:14:43.973 "nvme_io": false 00:14:43.973 }, 00:14:43.973 "memory_domains": [ 00:14:43.973 { 00:14:43.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.973 "dma_device_type": 2 00:14:43.973 } 00:14:43.973 ], 00:14:43.973 "driver_specific": {} 00:14:43.973 } 00:14:43.973 ] 00:14:43.973 13:00:02 -- common/autotest_common.sh@895 -- # return 0 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:43.974 "name": "Existed_Raid", 00:14:43.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.974 "strip_size_kb": 64, 00:14:43.974 "state": "configuring", 00:14:43.974 "raid_level": "concat", 00:14:43.974 "superblock": false, 00:14:43.974 "num_base_bdevs": 2, 00:14:43.974 "num_base_bdevs_discovered": 1, 00:14:43.974 "num_base_bdevs_operational": 2, 00:14:43.974 "base_bdevs_list": [ 00:14:43.974 { 00:14:43.974 "name": "BaseBdev1", 00:14:43.974 "uuid": "5ed981ef-d0ab-4237-b020-46cbea8bfe6c", 00:14:43.974 "is_configured": true, 00:14:43.974 "data_offset": 0, 00:14:43.974 "data_size": 65536 00:14:43.974 }, 00:14:43.974 { 00:14:43.974 "name": "BaseBdev2", 00:14:43.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.974 "is_configured": false, 00:14:43.974 "data_offset": 0, 00:14:43.974 "data_size": 0 00:14:43.974 } 00:14:43.974 ] 00:14:43.974 }' 00:14:43.974 13:00:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:43.974 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.904 13:00:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:44.905 [2024-06-11 13:00:03.659270] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.905 [2024-06-11 13:00:03.659350] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:44.905 13:00:03 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:44.905 13:00:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:45.162 [2024-06-11 13:00:03.855564] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.162 [2024-06-11 13:00:03.858482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:14:45.162 [2024-06-11 13:00:03.858590] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.162 13:00:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:45.162 13:00:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:45.162 13:00:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:45.162 13:00:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:45.162 13:00:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:45.162 13:00:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:45.162 13:00:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:45.163 13:00:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:45.163 13:00:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.163 13:00:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:45.163 13:00:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.163 13:00:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.163 13:00:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.163 13:00:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.420 13:00:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:45.420 "name": "Existed_Raid", 00:14:45.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.420 "strip_size_kb": 64, 00:14:45.420 "state": "configuring", 00:14:45.420 "raid_level": "concat", 00:14:45.420 "superblock": false, 00:14:45.421 "num_base_bdevs": 2, 00:14:45.421 "num_base_bdevs_discovered": 1, 00:14:45.421 "num_base_bdevs_operational": 2, 00:14:45.421 "base_bdevs_list": [ 00:14:45.421 { 00:14:45.421 "name": "BaseBdev1", 00:14:45.421 "uuid": "5ed981ef-d0ab-4237-b020-46cbea8bfe6c", 00:14:45.421 "is_configured": true, 00:14:45.421 "data_offset": 0, 00:14:45.421 "data_size": 65536 00:14:45.421 }, 00:14:45.421 { 00:14:45.421 "name": "BaseBdev2", 00:14:45.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.421 "is_configured": false, 00:14:45.421 "data_offset": 0, 00:14:45.421 "data_size": 0 00:14:45.421 } 00:14:45.421 ] 00:14:45.421 }' 00:14:45.421 13:00:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:45.421 13:00:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.985 13:00:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.242 [2024-06-11 13:00:05.012976] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.242 [2024-06-11 13:00:05.013060] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:46.242 [2024-06-11 13:00:05.013082] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:46.242 [2024-06-11 13:00:05.013247] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:46.242 [2024-06-11 13:00:05.013662] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:46.242 [2024-06-11 13:00:05.013686] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:14:46.242 [2024-06-11 13:00:05.013980] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.242 BaseBdev2 00:14:46.242 13:00:05 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:14:46.242 13:00:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:46.242 13:00:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:46.242 13:00:05 -- common/autotest_common.sh@889 -- # local i 00:14:46.242 13:00:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:46.242 13:00:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:46.242 13:00:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.500 13:00:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.758 [ 00:14:46.758 { 00:14:46.758 "name": "BaseBdev2", 00:14:46.758 "aliases": [ 00:14:46.758 "67d71855-8e5b-433c-9a8d-03fd452b06c0" 00:14:46.758 ], 00:14:46.758 "product_name": "Malloc disk", 00:14:46.758 "block_size": 512, 00:14:46.758 "num_blocks": 65536, 00:14:46.758 "uuid": "67d71855-8e5b-433c-9a8d-03fd452b06c0", 00:14:46.758 "assigned_rate_limits": { 00:14:46.758 "rw_ios_per_sec": 0, 00:14:46.758 "rw_mbytes_per_sec": 0, 00:14:46.758 "r_mbytes_per_sec": 0, 00:14:46.758 "w_mbytes_per_sec": 0 00:14:46.758 }, 00:14:46.758 "claimed": true, 00:14:46.758 "claim_type": "exclusive_write", 00:14:46.758 "zoned": false, 00:14:46.758 "supported_io_types": { 00:14:46.758 "read": true, 00:14:46.758 "write": true, 00:14:46.758 "unmap": true, 00:14:46.758 "write_zeroes": true, 00:14:46.758 "flush": true, 00:14:46.758 "reset": true, 00:14:46.758 "compare": false, 00:14:46.758 "compare_and_write": false, 00:14:46.758 "abort": true, 00:14:46.758 "nvme_admin": false, 00:14:46.758 "nvme_io": false 00:14:46.758 }, 00:14:46.758 "memory_domains": [ 00:14:46.758 { 00:14:46.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.758 "dma_device_type": 2 00:14:46.758 } 00:14:46.758 ], 00:14:46.758 "driver_specific": {} 00:14:46.758 } 00:14:46.758 ] 00:14:46.758 13:00:05 -- common/autotest_common.sh@895 -- # return 0 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.758 13:00:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.015 13:00:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.015 "name": "Existed_Raid", 00:14:47.015 "uuid": "48d444e2-ce47-4e49-8950-a4f418cfda1d", 00:14:47.015 "strip_size_kb": 64, 00:14:47.015 "state": "online", 00:14:47.015 "raid_level": "concat", 00:14:47.015 "superblock": false, 
00:14:47.015 "num_base_bdevs": 2, 00:14:47.015 "num_base_bdevs_discovered": 2, 00:14:47.015 "num_base_bdevs_operational": 2, 00:14:47.015 "base_bdevs_list": [ 00:14:47.015 { 00:14:47.015 "name": "BaseBdev1", 00:14:47.015 "uuid": "5ed981ef-d0ab-4237-b020-46cbea8bfe6c", 00:14:47.015 "is_configured": true, 00:14:47.015 "data_offset": 0, 00:14:47.015 "data_size": 65536 00:14:47.015 }, 00:14:47.015 { 00:14:47.015 "name": "BaseBdev2", 00:14:47.015 "uuid": "67d71855-8e5b-433c-9a8d-03fd452b06c0", 00:14:47.015 "is_configured": true, 00:14:47.015 "data_offset": 0, 00:14:47.015 "data_size": 65536 00:14:47.015 } 00:14:47.015 ] 00:14:47.015 }' 00:14:47.015 13:00:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.015 13:00:05 -- common/autotest_common.sh@10 -- # set +x 00:14:47.581 13:00:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:47.840 [2024-06-11 13:00:06.641496] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.840 [2024-06-11 13:00:06.641533] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.840 [2024-06-11 13:00:06.641617] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:48.100 "name": "Existed_Raid", 00:14:48.100 "uuid": "48d444e2-ce47-4e49-8950-a4f418cfda1d", 00:14:48.100 "strip_size_kb": 64, 00:14:48.100 "state": "offline", 00:14:48.100 "raid_level": "concat", 00:14:48.100 "superblock": false, 00:14:48.100 "num_base_bdevs": 2, 00:14:48.100 "num_base_bdevs_discovered": 1, 00:14:48.100 "num_base_bdevs_operational": 1, 00:14:48.100 "base_bdevs_list": [ 00:14:48.100 { 00:14:48.100 "name": null, 00:14:48.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.100 "is_configured": false, 00:14:48.100 "data_offset": 0, 00:14:48.100 "data_size": 65536 00:14:48.100 }, 00:14:48.100 { 00:14:48.100 "name": "BaseBdev2", 00:14:48.100 "uuid": "67d71855-8e5b-433c-9a8d-03fd452b06c0", 00:14:48.100 "is_configured": true, 00:14:48.100 "data_offset": 0, 00:14:48.100 
"data_size": 65536 00:14:48.100 } 00:14:48.100 ] 00:14:48.100 }' 00:14:48.100 13:00:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:48.100 13:00:06 -- common/autotest_common.sh@10 -- # set +x 00:14:49.049 13:00:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:49.049 13:00:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:49.049 13:00:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:49.049 13:00:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.049 13:00:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:49.049 13:00:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:49.049 13:00:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:49.312 [2024-06-11 13:00:08.080450] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:49.312 [2024-06-11 13:00:08.080534] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:14:49.575 13:00:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:49.575 13:00:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:49.575 13:00:08 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.575 13:00:08 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:49.575 13:00:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:49.575 13:00:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:49.575 13:00:08 -- bdev/bdev_raid.sh@287 -- # killprocess 115705 00:14:49.575 13:00:08 -- common/autotest_common.sh@926 -- # '[' -z 115705 ']' 00:14:49.575 13:00:08 -- common/autotest_common.sh@930 -- # kill -0 115705 00:14:49.575 13:00:08 -- common/autotest_common.sh@931 -- # uname 00:14:49.575 13:00:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:49.575 13:00:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115705 00:14:49.835 killing process with pid 115705 00:14:49.835 13:00:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:49.835 13:00:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:49.835 13:00:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115705' 00:14:49.835 13:00:08 -- common/autotest_common.sh@945 -- # kill 115705 00:14:49.835 13:00:08 -- common/autotest_common.sh@950 -- # wait 115705 00:14:49.835 [2024-06-11 13:00:08.430182] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:49.835 [2024-06-11 13:00:08.430340] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.770 ************************************ 00:14:50.770 END TEST raid_state_function_test 00:14:50.770 ************************************ 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:50.770 00:14:50.770 real 0m9.983s 00:14:50.770 user 0m17.551s 00:14:50.770 sys 0m1.128s 00:14:50.770 13:00:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.770 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:50.770 13:00:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:50.770 13:00:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:50.770 13:00:09 -- common/autotest_common.sh@10 -- # 
set +x 00:14:50.770 ************************************ 00:14:50.770 START TEST raid_state_function_test_sb 00:14:50.770 ************************************ 00:14:50.770 13:00:09 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:50.770 13:00:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=116032 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116032' 00:14:50.771 Process raid pid: 116032 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:50.771 13:00:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116032 /var/tmp/spdk-raid.sock 00:14:50.771 13:00:09 -- common/autotest_common.sh@819 -- # '[' -z 116032 ']' 00:14:50.771 13:00:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:50.771 13:00:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:50.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:50.771 13:00:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:50.771 13:00:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:50.771 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:14:50.771 [2024-06-11 13:00:09.569712] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
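The superblock variant (raid_state_function_test_sb) that begins here differs from the run above mainly in that superblock_create_arg=-s is passed through to the create step, matching the "superblock": true field in the later dumps; in sketch form:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid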
00:14:50.771 [2024-06-11 13:00:09.569908] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.029 [2024-06-11 13:00:09.742847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.288 [2024-06-11 13:00:09.982068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.546 [2024-06-11 13:00:10.171551] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.806 13:00:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:51.806 13:00:10 -- common/autotest_common.sh@852 -- # return 0 00:14:51.806 13:00:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:52.065 [2024-06-11 13:00:10.695275] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.065 [2024-06-11 13:00:10.695371] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.065 [2024-06-11 13:00:10.695385] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.065 [2024-06-11 13:00:10.695406] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.065 13:00:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.324 13:00:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:52.324 "name": "Existed_Raid", 00:14:52.324 "uuid": "3919608a-8290-4ed0-a6aa-d94aee1c0bfd", 00:14:52.324 "strip_size_kb": 64, 00:14:52.324 "state": "configuring", 00:14:52.324 "raid_level": "concat", 00:14:52.324 "superblock": true, 00:14:52.324 "num_base_bdevs": 2, 00:14:52.324 "num_base_bdevs_discovered": 0, 00:14:52.324 "num_base_bdevs_operational": 2, 00:14:52.324 "base_bdevs_list": [ 00:14:52.324 { 00:14:52.324 "name": "BaseBdev1", 00:14:52.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.324 "is_configured": false, 00:14:52.324 "data_offset": 0, 00:14:52.324 "data_size": 0 00:14:52.324 }, 00:14:52.324 { 00:14:52.324 "name": "BaseBdev2", 00:14:52.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.324 "is_configured": false, 00:14:52.324 "data_offset": 0, 00:14:52.324 "data_size": 0 00:14:52.324 } 00:14:52.324 ] 00:14:52.324 }' 00:14:52.324 13:00:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:52.324 13:00:10 -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.892 13:00:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:53.150 [2024-06-11 13:00:11.807305] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.150 [2024-06-11 13:00:11.807336] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:53.150 13:00:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:53.408 [2024-06-11 13:00:12.059400] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.408 [2024-06-11 13:00:12.059465] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.408 [2024-06-11 13:00:12.059477] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.408 [2024-06-11 13:00:12.059498] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.408 13:00:12 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:53.666 [2024-06-11 13:00:12.305891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.666 BaseBdev1 00:14:53.666 13:00:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:53.666 13:00:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:53.666 13:00:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:53.666 13:00:12 -- common/autotest_common.sh@889 -- # local i 00:14:53.666 13:00:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:53.666 13:00:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:53.666 13:00:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:53.925 13:00:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:53.925 [ 00:14:53.925 { 00:14:53.925 "name": "BaseBdev1", 00:14:53.925 "aliases": [ 00:14:53.925 "cd7c11a5-5657-42e9-85b5-5a41dadd443f" 00:14:53.925 ], 00:14:53.925 "product_name": "Malloc disk", 00:14:53.925 "block_size": 512, 00:14:53.925 "num_blocks": 65536, 00:14:53.925 "uuid": "cd7c11a5-5657-42e9-85b5-5a41dadd443f", 00:14:53.925 "assigned_rate_limits": { 00:14:53.925 "rw_ios_per_sec": 0, 00:14:53.925 "rw_mbytes_per_sec": 0, 00:14:53.925 "r_mbytes_per_sec": 0, 00:14:53.925 "w_mbytes_per_sec": 0 00:14:53.925 }, 00:14:53.925 "claimed": true, 00:14:53.925 "claim_type": "exclusive_write", 00:14:53.925 "zoned": false, 00:14:53.925 "supported_io_types": { 00:14:53.925 "read": true, 00:14:53.925 "write": true, 00:14:53.925 "unmap": true, 00:14:53.925 "write_zeroes": true, 00:14:53.925 "flush": true, 00:14:53.925 "reset": true, 00:14:53.925 "compare": false, 00:14:53.925 "compare_and_write": false, 00:14:53.925 "abort": true, 00:14:53.925 "nvme_admin": false, 00:14:53.925 "nvme_io": false 00:14:53.925 }, 00:14:53.925 "memory_domains": [ 00:14:53.925 { 00:14:53.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.925 "dma_device_type": 2 00:14:53.925 } 00:14:53.925 ], 00:14:53.925 "driver_specific": {} 00:14:53.925 } 00:14:53.925 ] 00:14:53.925 
13:00:12 -- common/autotest_common.sh@895 -- # return 0 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.925 13:00:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.184 13:00:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.184 "name": "Existed_Raid", 00:14:54.184 "uuid": "b0905d68-f357-4e41-8df2-846e751b2acc", 00:14:54.184 "strip_size_kb": 64, 00:14:54.184 "state": "configuring", 00:14:54.184 "raid_level": "concat", 00:14:54.184 "superblock": true, 00:14:54.184 "num_base_bdevs": 2, 00:14:54.184 "num_base_bdevs_discovered": 1, 00:14:54.184 "num_base_bdevs_operational": 2, 00:14:54.184 "base_bdevs_list": [ 00:14:54.184 { 00:14:54.184 "name": "BaseBdev1", 00:14:54.184 "uuid": "cd7c11a5-5657-42e9-85b5-5a41dadd443f", 00:14:54.184 "is_configured": true, 00:14:54.184 "data_offset": 2048, 00:14:54.184 "data_size": 63488 00:14:54.184 }, 00:14:54.184 { 00:14:54.184 "name": "BaseBdev2", 00:14:54.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.184 "is_configured": false, 00:14:54.184 "data_offset": 0, 00:14:54.184 "data_size": 0 00:14:54.184 } 00:14:54.184 ] 00:14:54.184 }' 00:14:54.184 13:00:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.184 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:14:54.752 13:00:13 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:55.012 [2024-06-11 13:00:13.758166] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.012 [2024-06-11 13:00:13.758213] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:55.012 13:00:13 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:55.012 13:00:13 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:55.270 13:00:14 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.529 BaseBdev1 00:14:55.529 13:00:14 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:55.529 13:00:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:55.529 13:00:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:55.529 13:00:14 -- common/autotest_common.sh@889 -- # local i 00:14:55.529 13:00:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:55.529 13:00:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:55.529 13:00:14 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:55.788 13:00:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:56.046 [ 00:14:56.046 { 00:14:56.046 "name": "BaseBdev1", 00:14:56.046 "aliases": [ 00:14:56.046 "95af0e0f-220a-41ce-b9ac-4a599bf7f503" 00:14:56.046 ], 00:14:56.046 "product_name": "Malloc disk", 00:14:56.046 "block_size": 512, 00:14:56.046 "num_blocks": 65536, 00:14:56.046 "uuid": "95af0e0f-220a-41ce-b9ac-4a599bf7f503", 00:14:56.046 "assigned_rate_limits": { 00:14:56.046 "rw_ios_per_sec": 0, 00:14:56.046 "rw_mbytes_per_sec": 0, 00:14:56.046 "r_mbytes_per_sec": 0, 00:14:56.046 "w_mbytes_per_sec": 0 00:14:56.046 }, 00:14:56.046 "claimed": false, 00:14:56.046 "zoned": false, 00:14:56.046 "supported_io_types": { 00:14:56.046 "read": true, 00:14:56.046 "write": true, 00:14:56.046 "unmap": true, 00:14:56.046 "write_zeroes": true, 00:14:56.046 "flush": true, 00:14:56.046 "reset": true, 00:14:56.046 "compare": false, 00:14:56.046 "compare_and_write": false, 00:14:56.046 "abort": true, 00:14:56.046 "nvme_admin": false, 00:14:56.046 "nvme_io": false 00:14:56.046 }, 00:14:56.047 "memory_domains": [ 00:14:56.047 { 00:14:56.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.047 "dma_device_type": 2 00:14:56.047 } 00:14:56.047 ], 00:14:56.047 "driver_specific": {} 00:14:56.047 } 00:14:56.047 ] 00:14:56.047 13:00:14 -- common/autotest_common.sh@895 -- # return 0 00:14:56.047 13:00:14 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:56.305 [2024-06-11 13:00:14.897479] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.305 [2024-06-11 13:00:14.899321] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.305 [2024-06-11 13:00:14.899387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.305 13:00:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.564 13:00:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:56.564 "name": "Existed_Raid", 00:14:56.564 "uuid": "71df3334-d433-42e9-ba81-40f879afa30e", 00:14:56.564 "strip_size_kb": 64, 00:14:56.564 "state": 
"configuring", 00:14:56.564 "raid_level": "concat", 00:14:56.564 "superblock": true, 00:14:56.564 "num_base_bdevs": 2, 00:14:56.564 "num_base_bdevs_discovered": 1, 00:14:56.564 "num_base_bdevs_operational": 2, 00:14:56.564 "base_bdevs_list": [ 00:14:56.564 { 00:14:56.564 "name": "BaseBdev1", 00:14:56.564 "uuid": "95af0e0f-220a-41ce-b9ac-4a599bf7f503", 00:14:56.564 "is_configured": true, 00:14:56.564 "data_offset": 2048, 00:14:56.564 "data_size": 63488 00:14:56.564 }, 00:14:56.564 { 00:14:56.564 "name": "BaseBdev2", 00:14:56.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.564 "is_configured": false, 00:14:56.564 "data_offset": 0, 00:14:56.564 "data_size": 0 00:14:56.564 } 00:14:56.564 ] 00:14:56.564 }' 00:14:56.564 13:00:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:56.564 13:00:15 -- common/autotest_common.sh@10 -- # set +x 00:14:57.149 13:00:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:57.409 [2024-06-11 13:00:16.105289] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.409 [2024-06-11 13:00:16.105549] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:14:57.409 [2024-06-11 13:00:16.105566] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:57.409 BaseBdev2 00:14:57.409 [2024-06-11 13:00:16.105691] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:57.409 [2024-06-11 13:00:16.106038] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:14:57.409 [2024-06-11 13:00:16.106060] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:14:57.409 [2024-06-11 13:00:16.106204] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.409 13:00:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:57.409 13:00:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:57.409 13:00:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:57.409 13:00:16 -- common/autotest_common.sh@889 -- # local i 00:14:57.409 13:00:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:57.409 13:00:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:57.409 13:00:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:57.668 13:00:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:57.926 [ 00:14:57.926 { 00:14:57.926 "name": "BaseBdev2", 00:14:57.926 "aliases": [ 00:14:57.926 "b5f2339e-1625-4878-bb10-9408b10a9841" 00:14:57.926 ], 00:14:57.926 "product_name": "Malloc disk", 00:14:57.926 "block_size": 512, 00:14:57.926 "num_blocks": 65536, 00:14:57.926 "uuid": "b5f2339e-1625-4878-bb10-9408b10a9841", 00:14:57.926 "assigned_rate_limits": { 00:14:57.926 "rw_ios_per_sec": 0, 00:14:57.926 "rw_mbytes_per_sec": 0, 00:14:57.926 "r_mbytes_per_sec": 0, 00:14:57.927 "w_mbytes_per_sec": 0 00:14:57.927 }, 00:14:57.927 "claimed": true, 00:14:57.927 "claim_type": "exclusive_write", 00:14:57.927 "zoned": false, 00:14:57.927 "supported_io_types": { 00:14:57.927 "read": true, 00:14:57.927 "write": true, 00:14:57.927 "unmap": true, 00:14:57.927 "write_zeroes": true, 00:14:57.927 "flush": true, 00:14:57.927 
"reset": true, 00:14:57.927 "compare": false, 00:14:57.927 "compare_and_write": false, 00:14:57.927 "abort": true, 00:14:57.927 "nvme_admin": false, 00:14:57.927 "nvme_io": false 00:14:57.927 }, 00:14:57.927 "memory_domains": [ 00:14:57.927 { 00:14:57.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.927 "dma_device_type": 2 00:14:57.927 } 00:14:57.927 ], 00:14:57.927 "driver_specific": {} 00:14:57.927 } 00:14:57.927 ] 00:14:57.927 13:00:16 -- common/autotest_common.sh@895 -- # return 0 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.927 13:00:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.186 13:00:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:58.186 "name": "Existed_Raid", 00:14:58.186 "uuid": "71df3334-d433-42e9-ba81-40f879afa30e", 00:14:58.186 "strip_size_kb": 64, 00:14:58.186 "state": "online", 00:14:58.186 "raid_level": "concat", 00:14:58.186 "superblock": true, 00:14:58.186 "num_base_bdevs": 2, 00:14:58.186 "num_base_bdevs_discovered": 2, 00:14:58.186 "num_base_bdevs_operational": 2, 00:14:58.186 "base_bdevs_list": [ 00:14:58.186 { 00:14:58.186 "name": "BaseBdev1", 00:14:58.186 "uuid": "95af0e0f-220a-41ce-b9ac-4a599bf7f503", 00:14:58.186 "is_configured": true, 00:14:58.186 "data_offset": 2048, 00:14:58.186 "data_size": 63488 00:14:58.186 }, 00:14:58.186 { 00:14:58.186 "name": "BaseBdev2", 00:14:58.186 "uuid": "b5f2339e-1625-4878-bb10-9408b10a9841", 00:14:58.186 "is_configured": true, 00:14:58.186 "data_offset": 2048, 00:14:58.186 "data_size": 63488 00:14:58.186 } 00:14:58.186 ] 00:14:58.186 }' 00:14:58.186 13:00:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:58.186 13:00:16 -- common/autotest_common.sh@10 -- # set +x 00:14:58.753 13:00:17 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:59.011 [2024-06-11 13:00:17.685685] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.011 [2024-06-11 13:00:17.685717] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.011 [2024-06-11 13:00:17.685768] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:59.011 
13:00:17 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.011 13:00:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.269 13:00:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:59.269 "name": "Existed_Raid", 00:14:59.269 "uuid": "71df3334-d433-42e9-ba81-40f879afa30e", 00:14:59.269 "strip_size_kb": 64, 00:14:59.269 "state": "offline", 00:14:59.269 "raid_level": "concat", 00:14:59.269 "superblock": true, 00:14:59.269 "num_base_bdevs": 2, 00:14:59.269 "num_base_bdevs_discovered": 1, 00:14:59.269 "num_base_bdevs_operational": 1, 00:14:59.269 "base_bdevs_list": [ 00:14:59.269 { 00:14:59.269 "name": null, 00:14:59.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.269 "is_configured": false, 00:14:59.269 "data_offset": 2048, 00:14:59.269 "data_size": 63488 00:14:59.269 }, 00:14:59.269 { 00:14:59.269 "name": "BaseBdev2", 00:14:59.269 "uuid": "b5f2339e-1625-4878-bb10-9408b10a9841", 00:14:59.269 "is_configured": true, 00:14:59.269 "data_offset": 2048, 00:14:59.269 "data_size": 63488 00:14:59.269 } 00:14:59.269 ] 00:14:59.269 }' 00:14:59.269 13:00:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:59.269 13:00:17 -- common/autotest_common.sh@10 -- # set +x 00:14:59.835 13:00:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:59.835 13:00:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:59.835 13:00:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.835 13:00:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:00.093 13:00:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:00.093 13:00:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:00.094 13:00:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:00.351 [2024-06-11 13:00:19.127981] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:00.352 [2024-06-11 13:00:19.128091] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:00.610 13:00:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:00.610 13:00:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:00.610 13:00:19 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.610 13:00:19 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:00.610 13:00:19 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:00.610 13:00:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:00.610 13:00:19 -- bdev/bdev_raid.sh@287 -- # killprocess 116032 00:15:00.610 13:00:19 -- common/autotest_common.sh@926 -- # '[' -z 116032 ']' 00:15:00.610 13:00:19 -- common/autotest_common.sh@930 -- # kill -0 116032 00:15:00.610 13:00:19 -- common/autotest_common.sh@931 -- # uname 00:15:00.610 13:00:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:00.872 13:00:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116032 00:15:00.872 killing process with pid 116032 00:15:00.872 13:00:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:00.872 13:00:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:00.872 13:00:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116032' 00:15:00.872 13:00:19 -- common/autotest_common.sh@945 -- # kill 116032 00:15:00.872 13:00:19 -- common/autotest_common.sh@950 -- # wait 116032 00:15:00.872 [2024-06-11 13:00:19.458019] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.872 [2024-06-11 13:00:19.458153] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.863 ************************************ 00:15:01.863 END TEST raid_state_function_test_sb 00:15:01.863 ************************************ 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:01.863 00:15:01.863 real 0m10.973s 00:15:01.863 user 0m19.237s 00:15:01.863 sys 0m1.276s 00:15:01.863 13:00:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:01.863 13:00:20 -- common/autotest_common.sh@10 -- # set +x 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:01.863 13:00:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:01.863 13:00:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:01.863 13:00:20 -- common/autotest_common.sh@10 -- # set +x 00:15:01.863 ************************************ 00:15:01.863 START TEST raid_superblock_test 00:15:01.863 ************************************ 00:15:01.863 13:00:20 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@357 -- # raid_pid=116379 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@358 -- # waitforlisten 116379 
/var/tmp/spdk-raid.sock 00:15:01.863 13:00:20 -- common/autotest_common.sh@819 -- # '[' -z 116379 ']' 00:15:01.863 13:00:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:01.863 13:00:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:01.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:01.863 13:00:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:01.863 13:00:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:01.863 13:00:20 -- common/autotest_common.sh@10 -- # set +x 00:15:01.863 13:00:20 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:01.863 [2024-06-11 13:00:20.588553] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:01.863 [2024-06-11 13:00:20.588922] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116379 ] 00:15:02.122 [2024-06-11 13:00:20.736595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.122 [2024-06-11 13:00:20.911083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.380 [2024-06-11 13:00:21.098479] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.948 13:00:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:02.948 13:00:21 -- common/autotest_common.sh@852 -- # return 0 00:15:02.948 13:00:21 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:02.948 13:00:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:02.948 13:00:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:02.948 13:00:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:02.948 13:00:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:02.948 13:00:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.948 13:00:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.948 13:00:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.948 13:00:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:02.948 malloc1 00:15:02.948 13:00:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:03.206 [2024-06-11 13:00:21.926461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:03.206 [2024-06-11 13:00:21.926745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.206 [2024-06-11 13:00:21.926885] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:03.206 [2024-06-11 13:00:21.927022] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.206 [2024-06-11 13:00:21.929456] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.206 [2024-06-11 13:00:21.929658] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:03.206 pt1 00:15:03.206 13:00:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
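Each base bdev for the superblock test is a 32 MiB malloc disk (512-byte blocks) wrapped in a passthru bdev with a fixed UUID, exactly as the RPCs above show for malloc1/pt1. A minimal sketch of that preparation step; the loop and the $rpc shorthand are illustrative, the RPC names and arguments mirror the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2; do
        # 32 MiB malloc disk with 512-byte blocks
        $rpc bdev_malloc_create 32 512 -b malloc$i
        # wrap it in a passthru bdev so the test controls the base bdev UUID
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
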
00:15:03.206 13:00:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:03.206 13:00:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:03.206 13:00:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:03.206 13:00:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:03.206 13:00:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:03.206 13:00:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:03.206 13:00:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:03.206 13:00:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:03.464 malloc2 00:15:03.464 13:00:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.721 [2024-06-11 13:00:22.370480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.721 [2024-06-11 13:00:22.370762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.721 [2024-06-11 13:00:22.370843] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:03.721 [2024-06-11 13:00:22.371103] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.721 [2024-06-11 13:00:22.373634] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.721 [2024-06-11 13:00:22.373820] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.721 pt2 00:15:03.721 13:00:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:03.721 13:00:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:03.721 13:00:22 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:03.979 [2024-06-11 13:00:22.634766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:03.979 [2024-06-11 13:00:22.636791] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.979 [2024-06-11 13:00:22.637175] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:03.979 [2024-06-11 13:00:22.637292] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:03.979 [2024-06-11 13:00:22.637488] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:03.979 [2024-06-11 13:00:22.637892] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:03.979 [2024-06-11 13:00:22.638007] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:03.979 [2024-06-11 13:00:22.638245] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.979 13:00:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.237 13:00:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:04.237 "name": "raid_bdev1", 00:15:04.237 "uuid": "4cf983ce-6f8a-4908-b5d0-382adc3c06b7", 00:15:04.237 "strip_size_kb": 64, 00:15:04.237 "state": "online", 00:15:04.237 "raid_level": "concat", 00:15:04.237 "superblock": true, 00:15:04.237 "num_base_bdevs": 2, 00:15:04.237 "num_base_bdevs_discovered": 2, 00:15:04.237 "num_base_bdevs_operational": 2, 00:15:04.237 "base_bdevs_list": [ 00:15:04.237 { 00:15:04.237 "name": "pt1", 00:15:04.237 "uuid": "08328d3d-b35a-50f4-8134-a88152d0617b", 00:15:04.237 "is_configured": true, 00:15:04.237 "data_offset": 2048, 00:15:04.237 "data_size": 63488 00:15:04.237 }, 00:15:04.237 { 00:15:04.237 "name": "pt2", 00:15:04.237 "uuid": "104f6088-3610-520a-a8d2-7d1afabcfa38", 00:15:04.237 "is_configured": true, 00:15:04.237 "data_offset": 2048, 00:15:04.238 "data_size": 63488 00:15:04.238 } 00:15:04.238 ] 00:15:04.238 }' 00:15:04.238 13:00:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:04.238 13:00:22 -- common/autotest_common.sh@10 -- # set +x 00:15:04.805 13:00:23 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:04.806 13:00:23 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:05.064 [2024-06-11 13:00:23.727308] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.064 13:00:23 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=4cf983ce-6f8a-4908-b5d0-382adc3c06b7 00:15:05.064 13:00:23 -- bdev/bdev_raid.sh@380 -- # '[' -z 4cf983ce-6f8a-4908-b5d0-382adc3c06b7 ']' 00:15:05.064 13:00:23 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:05.325 [2024-06-11 13:00:23.971050] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.325 [2024-06-11 13:00:23.971281] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.325 [2024-06-11 13:00:23.971506] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.325 [2024-06-11 13:00:23.971661] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.325 [2024-06-11 13:00:23.971758] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:05.325 13:00:23 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.325 13:00:23 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:05.582 13:00:24 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:05.582 13:00:24 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:05.582 13:00:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:05.582 13:00:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:15:05.582 13:00:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:05.582 13:00:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:05.839 13:00:24 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:05.839 13:00:24 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:06.097 13:00:24 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:06.097 13:00:24 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:06.097 13:00:24 -- common/autotest_common.sh@640 -- # local es=0 00:15:06.097 13:00:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:06.097 13:00:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.097 13:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:06.097 13:00:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.097 13:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:06.097 13:00:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.097 13:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:06.097 13:00:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.097 13:00:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:06.097 13:00:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:06.355 [2024-06-11 13:00:25.051207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:06.355 [2024-06-11 13:00:25.053192] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:06.355 [2024-06-11 13:00:25.053369] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:06.355 [2024-06-11 13:00:25.053592] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:06.355 [2024-06-11 13:00:25.053729] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.355 [2024-06-11 13:00:25.053777] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:15:06.355 request: 00:15:06.355 { 00:15:06.355 "name": "raid_bdev1", 00:15:06.355 "raid_level": "concat", 00:15:06.355 "base_bdevs": [ 00:15:06.355 "malloc1", 00:15:06.355 "malloc2" 00:15:06.355 ], 00:15:06.355 "superblock": false, 00:15:06.355 "strip_size_kb": 64, 00:15:06.355 "method": "bdev_raid_create", 00:15:06.355 "req_id": 1 00:15:06.355 } 00:15:06.355 Got JSON-RPC error response 00:15:06.355 response: 00:15:06.355 { 00:15:06.355 "code": -17, 00:15:06.355 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:06.355 } 00:15:06.355 13:00:25 -- common/autotest_common.sh@643 -- # es=1 00:15:06.355 13:00:25 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:06.355 13:00:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:06.355 13:00:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:06.355 13:00:25 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.355 13:00:25 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:06.613 13:00:25 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:06.613 13:00:25 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:06.613 13:00:25 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.613 [2024-06-11 13:00:25.431233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.613 [2024-06-11 13:00:25.431431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.614 [2024-06-11 13:00:25.431500] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:06.614 [2024-06-11 13:00:25.431746] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.614 [2024-06-11 13:00:25.434261] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.614 [2024-06-11 13:00:25.434487] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.614 [2024-06-11 13:00:25.434715] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:06.614 [2024-06-11 13:00:25.434891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.614 pt1 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.614 13:00:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.871 13:00:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.871 "name": "raid_bdev1", 00:15:06.871 "uuid": "4cf983ce-6f8a-4908-b5d0-382adc3c06b7", 00:15:06.871 "strip_size_kb": 64, 00:15:06.871 "state": "configuring", 00:15:06.871 "raid_level": "concat", 00:15:06.871 "superblock": true, 00:15:06.871 "num_base_bdevs": 2, 00:15:06.871 "num_base_bdevs_discovered": 1, 00:15:06.871 "num_base_bdevs_operational": 2, 00:15:06.871 "base_bdevs_list": [ 00:15:06.871 { 00:15:06.871 "name": "pt1", 00:15:06.871 "uuid": "08328d3d-b35a-50f4-8134-a88152d0617b", 00:15:06.871 "is_configured": true, 00:15:06.871 "data_offset": 2048, 00:15:06.871 "data_size": 63488 00:15:06.871 }, 00:15:06.871 { 00:15:06.871 "name": null, 00:15:06.871 "uuid": 
"104f6088-3610-520a-a8d2-7d1afabcfa38", 00:15:06.872 "is_configured": false, 00:15:06.872 "data_offset": 2048, 00:15:06.872 "data_size": 63488 00:15:06.872 } 00:15:06.872 ] 00:15:06.872 }' 00:15:06.872 13:00:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.872 13:00:25 -- common/autotest_common.sh@10 -- # set +x 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.805 [2024-06-11 13:00:26.539506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.805 [2024-06-11 13:00:26.539622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.805 [2024-06-11 13:00:26.539660] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:07.805 [2024-06-11 13:00:26.539685] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.805 [2024-06-11 13:00:26.540265] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.805 [2024-06-11 13:00:26.540329] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.805 [2024-06-11 13:00:26.540506] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:07.805 [2024-06-11 13:00:26.540543] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.805 [2024-06-11 13:00:26.540699] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:15:07.805 [2024-06-11 13:00:26.540713] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:07.805 [2024-06-11 13:00:26.540843] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:07.805 [2024-06-11 13:00:26.541166] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:15:07.805 [2024-06-11 13:00:26.541189] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:15:07.805 [2024-06-11 13:00:26.541331] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.805 pt2 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.805 13:00:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.063 13:00:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.063 "name": "raid_bdev1", 00:15:08.063 "uuid": "4cf983ce-6f8a-4908-b5d0-382adc3c06b7", 00:15:08.063 "strip_size_kb": 64, 00:15:08.063 "state": "online", 00:15:08.063 "raid_level": "concat", 00:15:08.063 "superblock": true, 00:15:08.063 "num_base_bdevs": 2, 00:15:08.063 "num_base_bdevs_discovered": 2, 00:15:08.063 "num_base_bdevs_operational": 2, 00:15:08.063 "base_bdevs_list": [ 00:15:08.063 { 00:15:08.063 "name": "pt1", 00:15:08.063 "uuid": "08328d3d-b35a-50f4-8134-a88152d0617b", 00:15:08.063 "is_configured": true, 00:15:08.063 "data_offset": 2048, 00:15:08.063 "data_size": 63488 00:15:08.063 }, 00:15:08.063 { 00:15:08.063 "name": "pt2", 00:15:08.063 "uuid": "104f6088-3610-520a-a8d2-7d1afabcfa38", 00:15:08.063 "is_configured": true, 00:15:08.063 "data_offset": 2048, 00:15:08.063 "data_size": 63488 00:15:08.063 } 00:15:08.063 ] 00:15:08.063 }' 00:15:08.063 13:00:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.063 13:00:26 -- common/autotest_common.sh@10 -- # set +x 00:15:08.645 13:00:27 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:08.645 13:00:27 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:08.907 [2024-06-11 13:00:27.707995] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.907 13:00:27 -- bdev/bdev_raid.sh@430 -- # '[' 4cf983ce-6f8a-4908-b5d0-382adc3c06b7 '!=' 4cf983ce-6f8a-4908-b5d0-382adc3c06b7 ']' 00:15:08.907 13:00:27 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:08.907 13:00:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:08.907 13:00:27 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:08.907 13:00:27 -- bdev/bdev_raid.sh@511 -- # killprocess 116379 00:15:08.907 13:00:27 -- common/autotest_common.sh@926 -- # '[' -z 116379 ']' 00:15:08.907 13:00:27 -- common/autotest_common.sh@930 -- # kill -0 116379 00:15:08.907 13:00:27 -- common/autotest_common.sh@931 -- # uname 00:15:08.907 13:00:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:08.907 13:00:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116379 00:15:08.907 killing process with pid 116379 00:15:08.907 13:00:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:08.907 13:00:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:08.907 13:00:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116379' 00:15:08.907 13:00:27 -- common/autotest_common.sh@945 -- # kill 116379 00:15:08.907 13:00:27 -- common/autotest_common.sh@950 -- # wait 116379 00:15:08.907 [2024-06-11 13:00:27.737351] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.907 [2024-06-11 13:00:27.737461] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.907 [2024-06-11 13:00:27.737532] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.907 [2024-06-11 13:00:27.737562] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:15:09.165 [2024-06-11 13:00:27.878283] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.101 ************************************ 00:15:10.102 END TEST raid_superblock_test 00:15:10.102 
************************************ 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:10.102 00:15:10.102 real 0m8.285s 00:15:10.102 user 0m14.284s 00:15:10.102 sys 0m0.976s 00:15:10.102 13:00:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.102 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:10.102 13:00:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:10.102 13:00:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:10.102 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:10.102 ************************************ 00:15:10.102 START TEST raid_state_function_test 00:15:10.102 ************************************ 00:15:10.102 13:00:28 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:10.102 13:00:28 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@226 -- # raid_pid=116648 00:15:10.103 Process raid pid: 116648 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116648' 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116648 /var/tmp/spdk-raid.sock 00:15:10.103 13:00:28 -- common/autotest_common.sh@819 -- # '[' -z 116648 ']' 00:15:10.103 13:00:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:10.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:10.103 13:00:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:10.103 13:00:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
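verify_raid_bdev_state, used before and after every reconfiguration in these tests, reads the raid bdev back with bdev_raid_get_bdevs and filters it with jq. A simplified sketch of that check, assuming the same RPC socket (the real helper compares more fields, including raid_level, strip_size_kb and the base bdev list):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<< "$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")

    [[ $state == configuring && $discovered -eq 0 ]] \
        || echo "unexpected raid bdev state: $state ($discovered base bdevs discovered)"
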
00:15:10.103 13:00:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:10.103 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:10.103 13:00:28 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:10.103 [2024-06-11 13:00:28.924705] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:10.104 [2024-06-11 13:00:28.925060] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.363 [2024-06-11 13:00:29.079451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.621 [2024-06-11 13:00:29.261965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.621 [2024-06-11 13:00:29.437643] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.187 13:00:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:11.187 13:00:29 -- common/autotest_common.sh@852 -- # return 0 00:15:11.187 13:00:29 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:11.445 [2024-06-11 13:00:30.110337] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.445 [2024-06-11 13:00:30.110543] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.445 [2024-06-11 13:00:30.110665] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.445 [2024-06-11 13:00:30.110771] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.445 13:00:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.701 13:00:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.701 "name": "Existed_Raid", 00:15:11.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.701 "strip_size_kb": 0, 00:15:11.701 "state": "configuring", 00:15:11.701 "raid_level": "raid1", 00:15:11.701 "superblock": false, 00:15:11.701 "num_base_bdevs": 2, 00:15:11.701 "num_base_bdevs_discovered": 0, 00:15:11.701 "num_base_bdevs_operational": 2, 00:15:11.701 "base_bdevs_list": [ 00:15:11.701 { 00:15:11.701 "name": "BaseBdev1", 00:15:11.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.701 "is_configured": false, 00:15:11.701 
"data_offset": 0, 00:15:11.701 "data_size": 0 00:15:11.701 }, 00:15:11.701 { 00:15:11.701 "name": "BaseBdev2", 00:15:11.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.701 "is_configured": false, 00:15:11.701 "data_offset": 0, 00:15:11.701 "data_size": 0 00:15:11.701 } 00:15:11.701 ] 00:15:11.701 }' 00:15:11.701 13:00:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.701 13:00:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.266 13:00:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:12.524 [2024-06-11 13:00:31.282516] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.524 [2024-06-11 13:00:31.282677] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:12.524 13:00:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:12.782 [2024-06-11 13:00:31.526516] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.782 [2024-06-11 13:00:31.526729] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.782 [2024-06-11 13:00:31.526850] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.782 [2024-06-11 13:00:31.526907] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.782 13:00:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:13.039 [2024-06-11 13:00:31.759697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.039 BaseBdev1 00:15:13.039 13:00:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:13.039 13:00:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:13.039 13:00:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:13.039 13:00:31 -- common/autotest_common.sh@889 -- # local i 00:15:13.039 13:00:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:13.039 13:00:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:13.039 13:00:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:13.297 13:00:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:13.555 [ 00:15:13.555 { 00:15:13.555 "name": "BaseBdev1", 00:15:13.555 "aliases": [ 00:15:13.555 "68a3a3d7-c46d-43aa-88c2-8612a127986b" 00:15:13.555 ], 00:15:13.555 "product_name": "Malloc disk", 00:15:13.555 "block_size": 512, 00:15:13.555 "num_blocks": 65536, 00:15:13.555 "uuid": "68a3a3d7-c46d-43aa-88c2-8612a127986b", 00:15:13.555 "assigned_rate_limits": { 00:15:13.555 "rw_ios_per_sec": 0, 00:15:13.555 "rw_mbytes_per_sec": 0, 00:15:13.555 "r_mbytes_per_sec": 0, 00:15:13.555 "w_mbytes_per_sec": 0 00:15:13.555 }, 00:15:13.555 "claimed": true, 00:15:13.555 "claim_type": "exclusive_write", 00:15:13.555 "zoned": false, 00:15:13.555 "supported_io_types": { 00:15:13.555 "read": true, 00:15:13.555 "write": true, 00:15:13.555 "unmap": true, 00:15:13.555 "write_zeroes": true, 00:15:13.555 "flush": true, 00:15:13.555 "reset": true, 00:15:13.555 "compare": false, 
00:15:13.555 "compare_and_write": false, 00:15:13.555 "abort": true, 00:15:13.555 "nvme_admin": false, 00:15:13.555 "nvme_io": false 00:15:13.555 }, 00:15:13.555 "memory_domains": [ 00:15:13.555 { 00:15:13.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.555 "dma_device_type": 2 00:15:13.555 } 00:15:13.555 ], 00:15:13.555 "driver_specific": {} 00:15:13.555 } 00:15:13.555 ] 00:15:13.555 13:00:32 -- common/autotest_common.sh@895 -- # return 0 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.555 13:00:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.814 13:00:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.814 "name": "Existed_Raid", 00:15:13.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.814 "strip_size_kb": 0, 00:15:13.814 "state": "configuring", 00:15:13.814 "raid_level": "raid1", 00:15:13.814 "superblock": false, 00:15:13.814 "num_base_bdevs": 2, 00:15:13.814 "num_base_bdevs_discovered": 1, 00:15:13.814 "num_base_bdevs_operational": 2, 00:15:13.814 "base_bdevs_list": [ 00:15:13.814 { 00:15:13.814 "name": "BaseBdev1", 00:15:13.814 "uuid": "68a3a3d7-c46d-43aa-88c2-8612a127986b", 00:15:13.814 "is_configured": true, 00:15:13.814 "data_offset": 0, 00:15:13.814 "data_size": 65536 00:15:13.814 }, 00:15:13.814 { 00:15:13.814 "name": "BaseBdev2", 00:15:13.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.814 "is_configured": false, 00:15:13.814 "data_offset": 0, 00:15:13.814 "data_size": 0 00:15:13.814 } 00:15:13.814 ] 00:15:13.814 }' 00:15:13.814 13:00:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.815 13:00:32 -- common/autotest_common.sh@10 -- # set +x 00:15:14.386 13:00:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:14.386 [2024-06-11 13:00:33.220133] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.386 [2024-06-11 13:00:33.220320] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:14.644 [2024-06-11 13:00:33.464223] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.644 [2024-06-11 13:00:33.465968] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.644 [2024-06-11 
13:00:33.466163] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.644 13:00:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.645 13:00:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.645 13:00:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.645 13:00:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.905 13:00:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.905 "name": "Existed_Raid", 00:15:14.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.905 "strip_size_kb": 0, 00:15:14.905 "state": "configuring", 00:15:14.905 "raid_level": "raid1", 00:15:14.905 "superblock": false, 00:15:14.905 "num_base_bdevs": 2, 00:15:14.905 "num_base_bdevs_discovered": 1, 00:15:14.905 "num_base_bdevs_operational": 2, 00:15:14.905 "base_bdevs_list": [ 00:15:14.905 { 00:15:14.905 "name": "BaseBdev1", 00:15:14.905 "uuid": "68a3a3d7-c46d-43aa-88c2-8612a127986b", 00:15:14.905 "is_configured": true, 00:15:14.905 "data_offset": 0, 00:15:14.905 "data_size": 65536 00:15:14.905 }, 00:15:14.905 { 00:15:14.905 "name": "BaseBdev2", 00:15:14.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.905 "is_configured": false, 00:15:14.905 "data_offset": 0, 00:15:14.905 "data_size": 0 00:15:14.905 } 00:15:14.905 ] 00:15:14.905 }' 00:15:14.905 13:00:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.905 13:00:33 -- common/autotest_common.sh@10 -- # set +x 00:15:15.864 13:00:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:15.864 [2024-06-11 13:00:34.662909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.864 [2024-06-11 13:00:34.662989] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:15.864 [2024-06-11 13:00:34.663001] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:15.864 [2024-06-11 13:00:34.663123] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:15.864 [2024-06-11 13:00:34.663505] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:15.864 [2024-06-11 13:00:34.663530] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:15.864 [2024-06-11 13:00:34.663807] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.864 BaseBdev2 00:15:15.864 13:00:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:15.864 
13:00:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:15.864 13:00:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:15.864 13:00:34 -- common/autotest_common.sh@889 -- # local i 00:15:15.864 13:00:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:15.864 13:00:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:15.864 13:00:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:16.123 13:00:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.381 [ 00:15:16.381 { 00:15:16.381 "name": "BaseBdev2", 00:15:16.381 "aliases": [ 00:15:16.381 "5d939684-fc59-4186-8dec-f481e6e6b651" 00:15:16.381 ], 00:15:16.381 "product_name": "Malloc disk", 00:15:16.381 "block_size": 512, 00:15:16.381 "num_blocks": 65536, 00:15:16.381 "uuid": "5d939684-fc59-4186-8dec-f481e6e6b651", 00:15:16.381 "assigned_rate_limits": { 00:15:16.381 "rw_ios_per_sec": 0, 00:15:16.381 "rw_mbytes_per_sec": 0, 00:15:16.381 "r_mbytes_per_sec": 0, 00:15:16.381 "w_mbytes_per_sec": 0 00:15:16.381 }, 00:15:16.381 "claimed": true, 00:15:16.381 "claim_type": "exclusive_write", 00:15:16.381 "zoned": false, 00:15:16.381 "supported_io_types": { 00:15:16.381 "read": true, 00:15:16.381 "write": true, 00:15:16.381 "unmap": true, 00:15:16.381 "write_zeroes": true, 00:15:16.381 "flush": true, 00:15:16.381 "reset": true, 00:15:16.381 "compare": false, 00:15:16.381 "compare_and_write": false, 00:15:16.381 "abort": true, 00:15:16.381 "nvme_admin": false, 00:15:16.381 "nvme_io": false 00:15:16.381 }, 00:15:16.381 "memory_domains": [ 00:15:16.381 { 00:15:16.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.381 "dma_device_type": 2 00:15:16.381 } 00:15:16.381 ], 00:15:16.381 "driver_specific": {} 00:15:16.381 } 00:15:16.381 ] 00:15:16.381 13:00:35 -- common/autotest_common.sh@895 -- # return 0 00:15:16.381 13:00:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:16.381 13:00:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:16.381 13:00:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:16.382 13:00:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:16.382 13:00:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:16.382 13:00:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:16.382 13:00:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:16.382 13:00:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:16.382 13:00:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.382 13:00:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.382 13:00:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.382 13:00:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.382 13:00:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.382 13:00:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.640 13:00:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.640 "name": "Existed_Raid", 00:15:16.640 "uuid": "a5bbe88a-aca4-4b31-823f-e7a7f05fdd34", 00:15:16.640 "strip_size_kb": 0, 00:15:16.640 "state": "online", 00:15:16.640 "raid_level": "raid1", 00:15:16.640 "superblock": false, 00:15:16.640 "num_base_bdevs": 2, 00:15:16.640 
"num_base_bdevs_discovered": 2, 00:15:16.640 "num_base_bdevs_operational": 2, 00:15:16.640 "base_bdevs_list": [ 00:15:16.640 { 00:15:16.640 "name": "BaseBdev1", 00:15:16.640 "uuid": "68a3a3d7-c46d-43aa-88c2-8612a127986b", 00:15:16.640 "is_configured": true, 00:15:16.640 "data_offset": 0, 00:15:16.640 "data_size": 65536 00:15:16.640 }, 00:15:16.640 { 00:15:16.640 "name": "BaseBdev2", 00:15:16.640 "uuid": "5d939684-fc59-4186-8dec-f481e6e6b651", 00:15:16.640 "is_configured": true, 00:15:16.640 "data_offset": 0, 00:15:16.640 "data_size": 65536 00:15:16.640 } 00:15:16.640 ] 00:15:16.640 }' 00:15:16.640 13:00:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.640 13:00:35 -- common/autotest_common.sh@10 -- # set +x 00:15:17.205 13:00:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:17.464 [2024-06-11 13:00:36.227322] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:17.723 "name": "Existed_Raid", 00:15:17.723 "uuid": "a5bbe88a-aca4-4b31-823f-e7a7f05fdd34", 00:15:17.723 "strip_size_kb": 0, 00:15:17.723 "state": "online", 00:15:17.723 "raid_level": "raid1", 00:15:17.723 "superblock": false, 00:15:17.723 "num_base_bdevs": 2, 00:15:17.723 "num_base_bdevs_discovered": 1, 00:15:17.723 "num_base_bdevs_operational": 1, 00:15:17.723 "base_bdevs_list": [ 00:15:17.723 { 00:15:17.723 "name": null, 00:15:17.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.723 "is_configured": false, 00:15:17.723 "data_offset": 0, 00:15:17.723 "data_size": 65536 00:15:17.723 }, 00:15:17.723 { 00:15:17.723 "name": "BaseBdev2", 00:15:17.723 "uuid": "5d939684-fc59-4186-8dec-f481e6e6b651", 00:15:17.723 "is_configured": true, 00:15:17.723 "data_offset": 0, 00:15:17.723 "data_size": 65536 00:15:17.723 } 00:15:17.723 ] 00:15:17.723 }' 00:15:17.723 13:00:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:17.723 13:00:36 -- common/autotest_common.sh@10 -- # set +x 00:15:18.658 13:00:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:18.658 13:00:37 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:15:18.658 13:00:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.658 13:00:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:18.658 13:00:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:18.658 13:00:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.658 13:00:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:18.917 [2024-06-11 13:00:37.537110] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.917 [2024-06-11 13:00:37.537142] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.917 [2024-06-11 13:00:37.537197] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.917 [2024-06-11 13:00:37.608922] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.917 [2024-06-11 13:00:37.608959] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:18.917 13:00:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:18.917 13:00:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:18.917 13:00:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:18.917 13:00:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.174 13:00:37 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:19.174 13:00:37 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:19.174 13:00:37 -- bdev/bdev_raid.sh@287 -- # killprocess 116648 00:15:19.174 13:00:37 -- common/autotest_common.sh@926 -- # '[' -z 116648 ']' 00:15:19.174 13:00:37 -- common/autotest_common.sh@930 -- # kill -0 116648 00:15:19.174 13:00:37 -- common/autotest_common.sh@931 -- # uname 00:15:19.174 13:00:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:19.174 13:00:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116648 00:15:19.174 killing process with pid 116648 00:15:19.174 13:00:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:19.174 13:00:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:19.174 13:00:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116648' 00:15:19.174 13:00:37 -- common/autotest_common.sh@945 -- # kill 116648 00:15:19.174 13:00:37 -- common/autotest_common.sh@950 -- # wait 116648 00:15:19.174 [2024-06-11 13:00:37.873977] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.174 [2024-06-11 13:00:37.874129] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.107 ************************************ 00:15:20.107 END TEST raid_state_function_test 00:15:20.107 ************************************ 00:15:20.107 13:00:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:20.107 00:15:20.107 real 0m9.966s 00:15:20.107 user 0m17.589s 00:15:20.107 sys 0m1.073s 00:15:20.107 13:00:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.107 13:00:38 -- common/autotest_common.sh@10 -- # set +x 00:15:20.107 13:00:38 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:20.107 13:00:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:20.108 13:00:38 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:20.108 13:00:38 -- common/autotest_common.sh@10 -- # set +x 00:15:20.108 ************************************ 00:15:20.108 START TEST raid_state_function_test_sb 00:15:20.108 ************************************ 00:15:20.108 13:00:38 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=116987 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:20.108 Process raid pid: 116987 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116987' 00:15:20.108 13:00:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116987 /var/tmp/spdk-raid.sock 00:15:20.108 13:00:38 -- common/autotest_common.sh@819 -- # '[' -z 116987 ']' 00:15:20.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:20.108 13:00:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:20.108 13:00:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:20.108 13:00:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:20.108 13:00:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:20.108 13:00:38 -- common/autotest_common.sh@10 -- # set +x 00:15:20.367 [2024-06-11 13:00:38.947204] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:15:20.367 [2024-06-11 13:00:38.947392] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.367 [2024-06-11 13:00:39.097463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.626 [2024-06-11 13:00:39.281186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.626 [2024-06-11 13:00:39.461455] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.193 13:00:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:21.193 13:00:39 -- common/autotest_common.sh@852 -- # return 0 00:15:21.193 13:00:39 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:21.450 [2024-06-11 13:00:40.129552] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.450 [2024-06-11 13:00:40.129634] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.450 [2024-06-11 13:00:40.129663] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.450 [2024-06-11 13:00:40.129683] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.450 13:00:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.708 13:00:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.708 "name": "Existed_Raid", 00:15:21.708 "uuid": "7cbe6bab-a3c0-46c7-b373-4a9bf70c37a4", 00:15:21.708 "strip_size_kb": 0, 00:15:21.708 "state": "configuring", 00:15:21.708 "raid_level": "raid1", 00:15:21.708 "superblock": true, 00:15:21.708 "num_base_bdevs": 2, 00:15:21.708 "num_base_bdevs_discovered": 0, 00:15:21.708 "num_base_bdevs_operational": 2, 00:15:21.708 "base_bdevs_list": [ 00:15:21.708 { 00:15:21.708 "name": "BaseBdev1", 00:15:21.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.708 "is_configured": false, 00:15:21.708 "data_offset": 0, 00:15:21.708 "data_size": 0 00:15:21.708 }, 00:15:21.708 { 00:15:21.708 "name": "BaseBdev2", 00:15:21.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.708 "is_configured": false, 00:15:21.708 "data_offset": 0, 00:15:21.708 "data_size": 0 00:15:21.708 } 00:15:21.708 ] 00:15:21.708 }' 00:15:21.708 13:00:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.708 13:00:40 -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.275 13:00:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:22.533 [2024-06-11 13:00:41.229670] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.533 [2024-06-11 13:00:41.229727] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:22.533 13:00:41 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:22.790 [2024-06-11 13:00:41.469786] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.790 [2024-06-11 13:00:41.469886] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.790 [2024-06-11 13:00:41.469915] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.790 [2024-06-11 13:00:41.469940] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.790 13:00:41 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.048 [2024-06-11 13:00:41.706766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.048 BaseBdev1 00:15:23.048 13:00:41 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:23.048 13:00:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:23.048 13:00:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:23.048 13:00:41 -- common/autotest_common.sh@889 -- # local i 00:15:23.048 13:00:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:23.048 13:00:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:23.048 13:00:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.307 13:00:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.564 [ 00:15:23.564 { 00:15:23.564 "name": "BaseBdev1", 00:15:23.564 "aliases": [ 00:15:23.564 "9ee99c5b-1777-4151-9e22-3862ad74db21" 00:15:23.564 ], 00:15:23.564 "product_name": "Malloc disk", 00:15:23.564 "block_size": 512, 00:15:23.564 "num_blocks": 65536, 00:15:23.564 "uuid": "9ee99c5b-1777-4151-9e22-3862ad74db21", 00:15:23.564 "assigned_rate_limits": { 00:15:23.564 "rw_ios_per_sec": 0, 00:15:23.564 "rw_mbytes_per_sec": 0, 00:15:23.564 "r_mbytes_per_sec": 0, 00:15:23.564 "w_mbytes_per_sec": 0 00:15:23.564 }, 00:15:23.564 "claimed": true, 00:15:23.564 "claim_type": "exclusive_write", 00:15:23.564 "zoned": false, 00:15:23.564 "supported_io_types": { 00:15:23.564 "read": true, 00:15:23.564 "write": true, 00:15:23.564 "unmap": true, 00:15:23.564 "write_zeroes": true, 00:15:23.564 "flush": true, 00:15:23.564 "reset": true, 00:15:23.564 "compare": false, 00:15:23.564 "compare_and_write": false, 00:15:23.564 "abort": true, 00:15:23.564 "nvme_admin": false, 00:15:23.564 "nvme_io": false 00:15:23.564 }, 00:15:23.564 "memory_domains": [ 00:15:23.564 { 00:15:23.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.564 "dma_device_type": 2 00:15:23.564 } 00:15:23.564 ], 00:15:23.564 "driver_specific": {} 00:15:23.564 } 00:15:23.564 ] 00:15:23.564 13:00:42 -- 
common/autotest_common.sh@895 -- # return 0 00:15:23.564 13:00:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:23.565 13:00:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:23.565 13:00:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:23.565 13:00:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:23.565 13:00:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:23.565 13:00:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:23.565 13:00:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:23.565 13:00:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:23.565 13:00:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:23.565 13:00:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:23.565 13:00:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.565 13:00:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.822 13:00:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.822 "name": "Existed_Raid", 00:15:23.822 "uuid": "b88ef9a2-e53a-4bb9-add3-f807e7cca67c", 00:15:23.822 "strip_size_kb": 0, 00:15:23.822 "state": "configuring", 00:15:23.822 "raid_level": "raid1", 00:15:23.822 "superblock": true, 00:15:23.822 "num_base_bdevs": 2, 00:15:23.823 "num_base_bdevs_discovered": 1, 00:15:23.823 "num_base_bdevs_operational": 2, 00:15:23.823 "base_bdevs_list": [ 00:15:23.823 { 00:15:23.823 "name": "BaseBdev1", 00:15:23.823 "uuid": "9ee99c5b-1777-4151-9e22-3862ad74db21", 00:15:23.823 "is_configured": true, 00:15:23.823 "data_offset": 2048, 00:15:23.823 "data_size": 63488 00:15:23.823 }, 00:15:23.823 { 00:15:23.823 "name": "BaseBdev2", 00:15:23.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.823 "is_configured": false, 00:15:23.823 "data_offset": 0, 00:15:23.823 "data_size": 0 00:15:23.823 } 00:15:23.823 ] 00:15:23.823 }' 00:15:23.823 13:00:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.823 13:00:42 -- common/autotest_common.sh@10 -- # set +x 00:15:24.396 13:00:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:24.396 [2024-06-11 13:00:43.215146] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.396 [2024-06-11 13:00:43.215205] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:24.396 13:00:43 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:24.396 13:00:43 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:24.963 13:00:43 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:24.963 BaseBdev1 00:15:24.963 13:00:43 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:24.963 13:00:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:24.963 13:00:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:24.963 13:00:43 -- common/autotest_common.sh@889 -- # local i 00:15:24.963 13:00:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:24.963 13:00:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:24.963 13:00:43 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:25.222 13:00:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.480 [ 00:15:25.480 { 00:15:25.480 "name": "BaseBdev1", 00:15:25.480 "aliases": [ 00:15:25.480 "8721d82a-6a73-426e-8587-5bf6ee650625" 00:15:25.480 ], 00:15:25.480 "product_name": "Malloc disk", 00:15:25.480 "block_size": 512, 00:15:25.480 "num_blocks": 65536, 00:15:25.480 "uuid": "8721d82a-6a73-426e-8587-5bf6ee650625", 00:15:25.480 "assigned_rate_limits": { 00:15:25.480 "rw_ios_per_sec": 0, 00:15:25.480 "rw_mbytes_per_sec": 0, 00:15:25.480 "r_mbytes_per_sec": 0, 00:15:25.480 "w_mbytes_per_sec": 0 00:15:25.480 }, 00:15:25.480 "claimed": false, 00:15:25.480 "zoned": false, 00:15:25.480 "supported_io_types": { 00:15:25.480 "read": true, 00:15:25.480 "write": true, 00:15:25.480 "unmap": true, 00:15:25.480 "write_zeroes": true, 00:15:25.480 "flush": true, 00:15:25.480 "reset": true, 00:15:25.480 "compare": false, 00:15:25.480 "compare_and_write": false, 00:15:25.480 "abort": true, 00:15:25.480 "nvme_admin": false, 00:15:25.480 "nvme_io": false 00:15:25.480 }, 00:15:25.480 "memory_domains": [ 00:15:25.480 { 00:15:25.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.480 "dma_device_type": 2 00:15:25.480 } 00:15:25.480 ], 00:15:25.480 "driver_specific": {} 00:15:25.480 } 00:15:25.480 ] 00:15:25.480 13:00:44 -- common/autotest_common.sh@895 -- # return 0 00:15:25.480 13:00:44 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:25.739 [2024-06-11 13:00:44.339965] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.739 [2024-06-11 13:00:44.341724] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.739 [2024-06-11 13:00:44.341801] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.739 13:00:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.998 13:00:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.998 "name": "Existed_Raid", 00:15:25.998 "uuid": "fca76f1b-41ef-489f-8fdf-b2daae7d2154", 00:15:25.998 "strip_size_kb": 0, 00:15:25.998 "state": "configuring", 
00:15:25.998 "raid_level": "raid1", 00:15:25.998 "superblock": true, 00:15:25.998 "num_base_bdevs": 2, 00:15:25.998 "num_base_bdevs_discovered": 1, 00:15:25.998 "num_base_bdevs_operational": 2, 00:15:25.998 "base_bdevs_list": [ 00:15:25.998 { 00:15:25.998 "name": "BaseBdev1", 00:15:25.998 "uuid": "8721d82a-6a73-426e-8587-5bf6ee650625", 00:15:25.998 "is_configured": true, 00:15:25.998 "data_offset": 2048, 00:15:25.998 "data_size": 63488 00:15:25.998 }, 00:15:25.998 { 00:15:25.998 "name": "BaseBdev2", 00:15:25.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.998 "is_configured": false, 00:15:25.998 "data_offset": 0, 00:15:25.998 "data_size": 0 00:15:25.998 } 00:15:25.998 ] 00:15:25.998 }' 00:15:25.998 13:00:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.998 13:00:44 -- common/autotest_common.sh@10 -- # set +x 00:15:26.564 13:00:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:26.822 [2024-06-11 13:00:45.522511] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.822 [2024-06-11 13:00:45.522721] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:26.822 [2024-06-11 13:00:45.522736] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:26.822 [2024-06-11 13:00:45.522877] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:26.822 [2024-06-11 13:00:45.523223] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:26.822 [2024-06-11 13:00:45.523244] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:26.822 [2024-06-11 13:00:45.523415] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.822 BaseBdev2 00:15:26.822 13:00:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:26.822 13:00:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:26.822 13:00:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:26.822 13:00:45 -- common/autotest_common.sh@889 -- # local i 00:15:26.822 13:00:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:26.822 13:00:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:26.822 13:00:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:27.080 13:00:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:27.338 [ 00:15:27.338 { 00:15:27.338 "name": "BaseBdev2", 00:15:27.338 "aliases": [ 00:15:27.338 "f57d2ec9-53c7-467b-b459-643d6de5e27d" 00:15:27.338 ], 00:15:27.338 "product_name": "Malloc disk", 00:15:27.338 "block_size": 512, 00:15:27.339 "num_blocks": 65536, 00:15:27.339 "uuid": "f57d2ec9-53c7-467b-b459-643d6de5e27d", 00:15:27.339 "assigned_rate_limits": { 00:15:27.339 "rw_ios_per_sec": 0, 00:15:27.339 "rw_mbytes_per_sec": 0, 00:15:27.339 "r_mbytes_per_sec": 0, 00:15:27.339 "w_mbytes_per_sec": 0 00:15:27.339 }, 00:15:27.339 "claimed": true, 00:15:27.339 "claim_type": "exclusive_write", 00:15:27.339 "zoned": false, 00:15:27.339 "supported_io_types": { 00:15:27.339 "read": true, 00:15:27.339 "write": true, 00:15:27.339 "unmap": true, 00:15:27.339 "write_zeroes": true, 00:15:27.339 "flush": true, 00:15:27.339 "reset": true, 
00:15:27.339 "compare": false, 00:15:27.339 "compare_and_write": false, 00:15:27.339 "abort": true, 00:15:27.339 "nvme_admin": false, 00:15:27.339 "nvme_io": false 00:15:27.339 }, 00:15:27.339 "memory_domains": [ 00:15:27.339 { 00:15:27.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.339 "dma_device_type": 2 00:15:27.339 } 00:15:27.339 ], 00:15:27.339 "driver_specific": {} 00:15:27.339 } 00:15:27.339 ] 00:15:27.339 13:00:45 -- common/autotest_common.sh@895 -- # return 0 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.339 13:00:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.339 13:00:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:27.339 "name": "Existed_Raid", 00:15:27.339 "uuid": "fca76f1b-41ef-489f-8fdf-b2daae7d2154", 00:15:27.339 "strip_size_kb": 0, 00:15:27.339 "state": "online", 00:15:27.339 "raid_level": "raid1", 00:15:27.339 "superblock": true, 00:15:27.339 "num_base_bdevs": 2, 00:15:27.339 "num_base_bdevs_discovered": 2, 00:15:27.339 "num_base_bdevs_operational": 2, 00:15:27.339 "base_bdevs_list": [ 00:15:27.339 { 00:15:27.339 "name": "BaseBdev1", 00:15:27.339 "uuid": "8721d82a-6a73-426e-8587-5bf6ee650625", 00:15:27.339 "is_configured": true, 00:15:27.339 "data_offset": 2048, 00:15:27.339 "data_size": 63488 00:15:27.339 }, 00:15:27.339 { 00:15:27.339 "name": "BaseBdev2", 00:15:27.339 "uuid": "f57d2ec9-53c7-467b-b459-643d6de5e27d", 00:15:27.339 "is_configured": true, 00:15:27.339 "data_offset": 2048, 00:15:27.339 "data_size": 63488 00:15:27.339 } 00:15:27.339 ] 00:15:27.339 }' 00:15:27.339 13:00:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:27.339 13:00:46 -- common/autotest_common.sh@10 -- # set +x 00:15:28.275 13:00:46 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:28.275 [2024-06-11 13:00:46.994964] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:28.275 
13:00:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.275 13:00:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.534 13:00:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.534 "name": "Existed_Raid", 00:15:28.534 "uuid": "fca76f1b-41ef-489f-8fdf-b2daae7d2154", 00:15:28.534 "strip_size_kb": 0, 00:15:28.534 "state": "online", 00:15:28.534 "raid_level": "raid1", 00:15:28.534 "superblock": true, 00:15:28.534 "num_base_bdevs": 2, 00:15:28.534 "num_base_bdevs_discovered": 1, 00:15:28.534 "num_base_bdevs_operational": 1, 00:15:28.534 "base_bdevs_list": [ 00:15:28.534 { 00:15:28.534 "name": null, 00:15:28.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.534 "is_configured": false, 00:15:28.534 "data_offset": 2048, 00:15:28.534 "data_size": 63488 00:15:28.534 }, 00:15:28.534 { 00:15:28.534 "name": "BaseBdev2", 00:15:28.534 "uuid": "f57d2ec9-53c7-467b-b459-643d6de5e27d", 00:15:28.534 "is_configured": true, 00:15:28.534 "data_offset": 2048, 00:15:28.534 "data_size": 63488 00:15:28.534 } 00:15:28.534 ] 00:15:28.534 }' 00:15:28.534 13:00:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.534 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:15:29.469 13:00:47 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:29.469 13:00:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:29.469 13:00:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.469 13:00:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:29.469 13:00:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:29.469 13:00:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.469 13:00:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:29.728 [2024-06-11 13:00:48.389255] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.728 [2024-06-11 13:00:48.389289] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.728 [2024-06-11 13:00:48.389368] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.728 [2024-06-11 13:00:48.457592] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.728 [2024-06-11 13:00:48.457625] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:29.728 13:00:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:29.728 13:00:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:29.728 13:00:48 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:29.728 13:00:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:29.987 13:00:48 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:29.987 13:00:48 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:29.987 13:00:48 -- bdev/bdev_raid.sh@287 -- # killprocess 116987 00:15:29.987 13:00:48 -- common/autotest_common.sh@926 -- # '[' -z 116987 ']' 00:15:29.987 13:00:48 -- common/autotest_common.sh@930 -- # kill -0 116987 00:15:29.987 13:00:48 -- common/autotest_common.sh@931 -- # uname 00:15:29.987 13:00:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:29.987 13:00:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116987 00:15:29.987 13:00:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:29.987 killing process with pid 116987 00:15:29.987 13:00:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:29.987 13:00:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116987' 00:15:29.987 13:00:48 -- common/autotest_common.sh@945 -- # kill 116987 00:15:29.987 13:00:48 -- common/autotest_common.sh@950 -- # wait 116987 00:15:29.987 [2024-06-11 13:00:48.757776] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.987 [2024-06-11 13:00:48.757932] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.921 13:00:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:30.921 00:15:30.921 real 0m10.834s 00:15:30.921 user 0m19.082s 00:15:30.921 sys 0m1.210s 00:15:30.921 13:00:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.921 ************************************ 00:15:30.922 END TEST raid_state_function_test_sb 00:15:30.922 ************************************ 00:15:30.922 13:00:49 -- common/autotest_common.sh@10 -- # set +x 00:15:30.922 13:00:49 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:30.922 13:00:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:30.922 13:00:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:30.922 13:00:49 -- common/autotest_common.sh@10 -- # set +x 00:15:31.180 ************************************ 00:15:31.180 START TEST raid_superblock_test 00:15:31.180 ************************************ 00:15:31.180 13:00:49 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@357 -- # raid_pid=117331 00:15:31.180 13:00:49 
-- bdev/bdev_raid.sh@358 -- # waitforlisten 117331 /var/tmp/spdk-raid.sock 00:15:31.180 13:00:49 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:31.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:31.180 13:00:49 -- common/autotest_common.sh@819 -- # '[' -z 117331 ']' 00:15:31.180 13:00:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:31.180 13:00:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:31.180 13:00:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:31.180 13:00:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:31.180 13:00:49 -- common/autotest_common.sh@10 -- # set +x 00:15:31.180 [2024-06-11 13:00:49.835834] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:31.180 [2024-06-11 13:00:49.836046] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117331 ] 00:15:31.180 [2024-06-11 13:00:50.002838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.437 [2024-06-11 13:00:50.184648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.695 [2024-06-11 13:00:50.354894] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.953 13:00:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:31.954 13:00:50 -- common/autotest_common.sh@852 -- # return 0 00:15:31.954 13:00:50 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:31.954 13:00:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:31.954 13:00:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:31.954 13:00:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:31.954 13:00:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:31.954 13:00:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.954 13:00:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.954 13:00:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.954 13:00:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:32.522 malloc1 00:15:32.522 13:00:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:32.522 [2024-06-11 13:00:51.242772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:32.522 [2024-06-11 13:00:51.242861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.522 [2024-06-11 13:00:51.242891] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:32.522 [2024-06-11 13:00:51.242933] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.522 [2024-06-11 13:00:51.244928] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.522 [2024-06-11 13:00:51.244972] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:32.522 pt1 00:15:32.522 
13:00:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:32.522 13:00:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:32.522 13:00:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:32.522 13:00:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:32.522 13:00:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:32.522 13:00:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:32.522 13:00:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:32.522 13:00:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:32.522 13:00:51 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:32.780 malloc2 00:15:32.780 13:00:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:33.039 [2024-06-11 13:00:51.660447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:33.039 [2024-06-11 13:00:51.660541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.039 [2024-06-11 13:00:51.660580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:33.039 [2024-06-11 13:00:51.660629] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.039 [2024-06-11 13:00:51.662564] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.039 [2024-06-11 13:00:51.662625] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:33.039 pt2 00:15:33.039 13:00:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:33.039 13:00:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:33.039 13:00:51 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:33.297 [2024-06-11 13:00:51.880614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:33.297 [2024-06-11 13:00:51.882563] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:33.298 [2024-06-11 13:00:51.882756] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:33.298 [2024-06-11 13:00:51.882770] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:33.298 [2024-06-11 13:00:51.882917] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:33.298 [2024-06-11 13:00:51.883271] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:33.298 [2024-06-11 13:00:51.883295] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:33.298 [2024-06-11 13:00:51.883449] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.298 13:00:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.555 13:00:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.555 "name": "raid_bdev1", 00:15:33.555 "uuid": "a157c298-a3d5-400c-a449-840a25242384", 00:15:33.555 "strip_size_kb": 0, 00:15:33.555 "state": "online", 00:15:33.555 "raid_level": "raid1", 00:15:33.555 "superblock": true, 00:15:33.555 "num_base_bdevs": 2, 00:15:33.555 "num_base_bdevs_discovered": 2, 00:15:33.555 "num_base_bdevs_operational": 2, 00:15:33.555 "base_bdevs_list": [ 00:15:33.555 { 00:15:33.555 "name": "pt1", 00:15:33.555 "uuid": "69b2dfe7-a3c1-558b-94ac-51b22eb49687", 00:15:33.555 "is_configured": true, 00:15:33.555 "data_offset": 2048, 00:15:33.555 "data_size": 63488 00:15:33.555 }, 00:15:33.555 { 00:15:33.555 "name": "pt2", 00:15:33.555 "uuid": "a38cfff7-c4a1-52ae-870f-7eb939ace9d8", 00:15:33.555 "is_configured": true, 00:15:33.555 "data_offset": 2048, 00:15:33.555 "data_size": 63488 00:15:33.555 } 00:15:33.555 ] 00:15:33.555 }' 00:15:33.555 13:00:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.555 13:00:52 -- common/autotest_common.sh@10 -- # set +x 00:15:34.121 13:00:52 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:34.121 13:00:52 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:34.121 [2024-06-11 13:00:52.953124] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.379 13:00:52 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=a157c298-a3d5-400c-a449-840a25242384 00:15:34.379 13:00:52 -- bdev/bdev_raid.sh@380 -- # '[' -z a157c298-a3d5-400c-a449-840a25242384 ']' 00:15:34.379 13:00:52 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:34.379 [2024-06-11 13:00:53.216974] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.379 [2024-06-11 13:00:53.217018] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.379 [2024-06-11 13:00:53.217101] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.379 [2024-06-11 13:00:53.217165] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.379 [2024-06-11 13:00:53.217177] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:34.637 13:00:53 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.637 13:00:53 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:34.637 13:00:53 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:34.637 13:00:53 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:34.637 13:00:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.637 13:00:53 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:34.895 13:00:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.895 13:00:53 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:35.152 13:00:53 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:35.152 13:00:53 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:35.410 13:00:54 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:35.410 13:00:54 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:35.410 13:00:54 -- common/autotest_common.sh@640 -- # local es=0 00:15:35.410 13:00:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:35.410 13:00:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.410 13:00:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:35.410 13:00:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.410 13:00:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:35.411 13:00:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.411 13:00:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:35.411 13:00:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.411 13:00:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:35.411 13:00:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:35.669 [2024-06-11 13:00:54.289211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:35.669 [2024-06-11 13:00:54.290978] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:35.669 [2024-06-11 13:00:54.291049] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:35.669 [2024-06-11 13:00:54.291132] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:35.669 [2024-06-11 13:00:54.291168] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.669 [2024-06-11 13:00:54.291179] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:15:35.669 request: 00:15:35.669 { 00:15:35.669 "name": "raid_bdev1", 00:15:35.669 "raid_level": "raid1", 00:15:35.669 "base_bdevs": [ 00:15:35.669 "malloc1", 00:15:35.669 "malloc2" 00:15:35.669 ], 00:15:35.669 "superblock": false, 00:15:35.669 "method": "bdev_raid_create", 00:15:35.669 "req_id": 1 00:15:35.669 } 00:15:35.669 Got JSON-RPC error response 00:15:35.669 response: 00:15:35.669 { 00:15:35.669 "code": -17, 00:15:35.669 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:35.669 } 00:15:35.669 13:00:54 -- common/autotest_common.sh@643 -- # es=1 00:15:35.669 13:00:54 -- common/autotest_common.sh@651 -- # 
(( es > 128 )) 00:15:35.669 13:00:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:35.669 13:00:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:35.669 13:00:54 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.669 13:00:54 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:35.927 13:00:54 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:35.927 13:00:54 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.928 [2024-06-11 13:00:54.741233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.928 [2024-06-11 13:00:54.741358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.928 [2024-06-11 13:00:54.741397] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:35.928 [2024-06-11 13:00:54.741422] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.928 [2024-06-11 13:00:54.743852] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.928 [2024-06-11 13:00:54.743919] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.928 [2024-06-11 13:00:54.744032] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:35.928 [2024-06-11 13:00:54.744099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.928 pt1 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.928 13:00:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.186 13:00:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.186 "name": "raid_bdev1", 00:15:36.186 "uuid": "a157c298-a3d5-400c-a449-840a25242384", 00:15:36.186 "strip_size_kb": 0, 00:15:36.186 "state": "configuring", 00:15:36.186 "raid_level": "raid1", 00:15:36.186 "superblock": true, 00:15:36.186 "num_base_bdevs": 2, 00:15:36.186 "num_base_bdevs_discovered": 1, 00:15:36.186 "num_base_bdevs_operational": 2, 00:15:36.186 "base_bdevs_list": [ 00:15:36.186 { 00:15:36.186 "name": "pt1", 00:15:36.186 "uuid": "69b2dfe7-a3c1-558b-94ac-51b22eb49687", 00:15:36.186 "is_configured": true, 00:15:36.186 "data_offset": 2048, 00:15:36.186 "data_size": 63488 00:15:36.186 }, 00:15:36.186 { 00:15:36.186 "name": null, 00:15:36.186 "uuid": "a38cfff7-c4a1-52ae-870f-7eb939ace9d8", 00:15:36.186 
"is_configured": false, 00:15:36.186 "data_offset": 2048, 00:15:36.186 "data_size": 63488 00:15:36.186 } 00:15:36.186 ] 00:15:36.186 }' 00:15:36.186 13:00:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.186 13:00:54 -- common/autotest_common.sh@10 -- # set +x 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:37.138 [2024-06-11 13:00:55.805504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:37.138 [2024-06-11 13:00:55.805656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.138 [2024-06-11 13:00:55.805694] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:37.138 [2024-06-11 13:00:55.805719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.138 [2024-06-11 13:00:55.806255] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.138 [2024-06-11 13:00:55.806335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:37.138 [2024-06-11 13:00:55.806463] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:37.138 [2024-06-11 13:00:55.806491] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.138 [2024-06-11 13:00:55.806662] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:15:37.138 [2024-06-11 13:00:55.806676] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:37.138 [2024-06-11 13:00:55.806802] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:37.138 [2024-06-11 13:00:55.807114] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:15:37.138 [2024-06-11 13:00:55.807139] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:15:37.138 [2024-06-11 13:00:55.807302] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.138 pt2 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.138 13:00:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.138 13:00:55 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.396 13:00:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.396 "name": "raid_bdev1", 00:15:37.396 "uuid": "a157c298-a3d5-400c-a449-840a25242384", 00:15:37.396 "strip_size_kb": 0, 00:15:37.396 "state": "online", 00:15:37.396 "raid_level": "raid1", 00:15:37.396 "superblock": true, 00:15:37.396 "num_base_bdevs": 2, 00:15:37.396 "num_base_bdevs_discovered": 2, 00:15:37.396 "num_base_bdevs_operational": 2, 00:15:37.396 "base_bdevs_list": [ 00:15:37.396 { 00:15:37.396 "name": "pt1", 00:15:37.396 "uuid": "69b2dfe7-a3c1-558b-94ac-51b22eb49687", 00:15:37.396 "is_configured": true, 00:15:37.396 "data_offset": 2048, 00:15:37.396 "data_size": 63488 00:15:37.396 }, 00:15:37.396 { 00:15:37.396 "name": "pt2", 00:15:37.396 "uuid": "a38cfff7-c4a1-52ae-870f-7eb939ace9d8", 00:15:37.396 "is_configured": true, 00:15:37.396 "data_offset": 2048, 00:15:37.396 "data_size": 63488 00:15:37.396 } 00:15:37.396 ] 00:15:37.396 }' 00:15:37.396 13:00:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.396 13:00:56 -- common/autotest_common.sh@10 -- # set +x 00:15:37.962 13:00:56 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:37.962 13:00:56 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:38.220 [2024-06-11 13:00:56.969962] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.220 13:00:56 -- bdev/bdev_raid.sh@430 -- # '[' a157c298-a3d5-400c-a449-840a25242384 '!=' a157c298-a3d5-400c-a449-840a25242384 ']' 00:15:38.220 13:00:56 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:15:38.220 13:00:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:38.220 13:00:56 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:38.220 13:00:56 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:38.476 [2024-06-11 13:00:57.165905] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.476 13:00:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.734 13:00:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.734 "name": "raid_bdev1", 00:15:38.734 "uuid": "a157c298-a3d5-400c-a449-840a25242384", 00:15:38.734 "strip_size_kb": 0, 00:15:38.734 "state": "online", 00:15:38.734 "raid_level": "raid1", 00:15:38.734 "superblock": true, 00:15:38.734 "num_base_bdevs": 2, 00:15:38.734 "num_base_bdevs_discovered": 1, 00:15:38.734 "num_base_bdevs_operational": 1, 00:15:38.734 
"base_bdevs_list": [ 00:15:38.734 { 00:15:38.734 "name": null, 00:15:38.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.734 "is_configured": false, 00:15:38.734 "data_offset": 2048, 00:15:38.734 "data_size": 63488 00:15:38.734 }, 00:15:38.734 { 00:15:38.734 "name": "pt2", 00:15:38.734 "uuid": "a38cfff7-c4a1-52ae-870f-7eb939ace9d8", 00:15:38.734 "is_configured": true, 00:15:38.734 "data_offset": 2048, 00:15:38.734 "data_size": 63488 00:15:38.734 } 00:15:38.734 ] 00:15:38.734 }' 00:15:38.734 13:00:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:38.734 13:00:57 -- common/autotest_common.sh@10 -- # set +x 00:15:39.300 13:00:57 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:39.558 [2024-06-11 13:00:58.254168] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.558 [2024-06-11 13:00:58.254201] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.558 [2024-06-11 13:00:58.254300] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.558 [2024-06-11 13:00:58.254354] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.558 [2024-06-11 13:00:58.254365] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:15:39.558 13:00:58 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.558 13:00:58 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:39.815 13:00:58 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:39.815 13:00:58 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:39.815 13:00:58 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:39.815 13:00:58 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:39.815 13:00:58 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:40.073 13:00:58 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:40.073 13:00:58 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:40.073 13:00:58 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:40.074 13:00:58 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:40.074 13:00:58 -- bdev/bdev_raid.sh@462 -- # i=1 00:15:40.074 13:00:58 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:40.074 [2024-06-11 13:00:58.902275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:40.074 [2024-06-11 13:00:58.902395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.074 [2024-06-11 13:00:58.902427] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:40.074 [2024-06-11 13:00:58.902457] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.074 [2024-06-11 13:00:58.904651] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.074 [2024-06-11 13:00:58.904720] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:40.074 [2024-06-11 13:00:58.904865] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:40.074 [2024-06-11 13:00:58.904985] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.074 [2024-06-11 13:00:58.905097] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:15:40.074 [2024-06-11 13:00:58.905112] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:40.074 [2024-06-11 13:00:58.905213] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:15:40.074 [2024-06-11 13:00:58.905542] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:15:40.074 [2024-06-11 13:00:58.905565] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:15:40.074 [2024-06-11 13:00:58.905714] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.074 pt2 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.334 13:00:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.334 13:00:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.334 "name": "raid_bdev1", 00:15:40.334 "uuid": "a157c298-a3d5-400c-a449-840a25242384", 00:15:40.334 "strip_size_kb": 0, 00:15:40.334 "state": "online", 00:15:40.334 "raid_level": "raid1", 00:15:40.334 "superblock": true, 00:15:40.334 "num_base_bdevs": 2, 00:15:40.334 "num_base_bdevs_discovered": 1, 00:15:40.334 "num_base_bdevs_operational": 1, 00:15:40.334 "base_bdevs_list": [ 00:15:40.334 { 00:15:40.334 "name": null, 00:15:40.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.334 "is_configured": false, 00:15:40.334 "data_offset": 2048, 00:15:40.334 "data_size": 63488 00:15:40.334 }, 00:15:40.334 { 00:15:40.334 "name": "pt2", 00:15:40.334 "uuid": "a38cfff7-c4a1-52ae-870f-7eb939ace9d8", 00:15:40.334 "is_configured": true, 00:15:40.334 "data_offset": 2048, 00:15:40.334 "data_size": 63488 00:15:40.334 } 00:15:40.334 ] 00:15:40.334 }' 00:15:40.334 13:00:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.334 13:00:59 -- common/autotest_common.sh@10 -- # set +x 00:15:41.273 13:00:59 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:15:41.273 13:00:59 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:41.273 13:00:59 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:15:41.273 [2024-06-11 13:00:59.950265] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.273 13:00:59 -- bdev/bdev_raid.sh@506 -- # '[' a157c298-a3d5-400c-a449-840a25242384 '!=' a157c298-a3d5-400c-a449-840a25242384 ']' 00:15:41.273 13:00:59 -- 
bdev/bdev_raid.sh@511 -- # killprocess 117331 00:15:41.273 13:00:59 -- common/autotest_common.sh@926 -- # '[' -z 117331 ']' 00:15:41.273 13:00:59 -- common/autotest_common.sh@930 -- # kill -0 117331 00:15:41.273 13:00:59 -- common/autotest_common.sh@931 -- # uname 00:15:41.273 13:00:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:41.273 13:00:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117331 00:15:41.273 13:00:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:41.273 killing process with pid 117331 00:15:41.273 13:00:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:41.273 13:00:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117331' 00:15:41.273 13:00:59 -- common/autotest_common.sh@945 -- # kill 117331 00:15:41.273 13:00:59 -- common/autotest_common.sh@950 -- # wait 117331 00:15:41.273 [2024-06-11 13:00:59.985697] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.273 [2024-06-11 13:00:59.985774] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.273 [2024-06-11 13:00:59.985884] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.273 [2024-06-11 13:00:59.985906] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:15:41.532 [2024-06-11 13:01:00.118911] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:42.468 00:15:42.468 real 0m11.275s 00:15:42.468 user 0m20.266s 00:15:42.468 sys 0m1.306s 00:15:42.468 13:01:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.468 13:01:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.468 ************************************ 00:15:42.468 END TEST raid_superblock_test 00:15:42.468 ************************************ 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:42.468 13:01:01 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:42.468 13:01:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:42.468 13:01:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.468 ************************************ 00:15:42.468 START TEST raid_state_function_test 00:15:42.468 ************************************ 00:15:42.468 13:01:01 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:42.468 13:01:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 
00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@226 -- # raid_pid=117700 00:15:42.469 Process raid pid: 117700 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117700' 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117700 /var/tmp/spdk-raid.sock 00:15:42.469 13:01:01 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:42.469 13:01:01 -- common/autotest_common.sh@819 -- # '[' -z 117700 ']' 00:15:42.469 13:01:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:42.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:42.469 13:01:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:42.469 13:01:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:42.469 13:01:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:42.469 13:01:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.469 [2024-06-11 13:01:01.164042] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:15:42.469 [2024-06-11 13:01:01.164234] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.728 [2024-06-11 13:01:01.334696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.728 [2024-06-11 13:01:01.562208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.987 [2024-06-11 13:01:01.731466] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.245 13:01:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:43.245 13:01:02 -- common/autotest_common.sh@852 -- # return 0 00:15:43.245 13:01:02 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:43.502 [2024-06-11 13:01:02.325534] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.502 [2024-06-11 13:01:02.325629] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.502 [2024-06-11 13:01:02.325657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.502 [2024-06-11 13:01:02.325676] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.502 [2024-06-11 13:01:02.325683] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.503 [2024-06-11 13:01:02.325719] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.503 13:01:02 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:43.503 13:01:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.503 13:01:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:43.503 13:01:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:43.503 13:01:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.503 13:01:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:43.503 13:01:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.503 13:01:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.503 13:01:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.503 13:01:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.760 13:01:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.760 13:01:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.760 13:01:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.760 "name": "Existed_Raid", 00:15:43.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.760 "strip_size_kb": 64, 00:15:43.760 "state": "configuring", 00:15:43.760 "raid_level": "raid0", 00:15:43.760 "superblock": false, 00:15:43.760 "num_base_bdevs": 3, 00:15:43.760 "num_base_bdevs_discovered": 0, 00:15:43.760 "num_base_bdevs_operational": 3, 00:15:43.760 "base_bdevs_list": [ 00:15:43.760 { 00:15:43.760 "name": "BaseBdev1", 00:15:43.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.760 "is_configured": false, 00:15:43.760 "data_offset": 0, 00:15:43.760 "data_size": 0 00:15:43.760 }, 00:15:43.760 { 00:15:43.760 "name": "BaseBdev2", 00:15:43.760 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:43.760 "is_configured": false, 00:15:43.760 "data_offset": 0, 00:15:43.760 "data_size": 0 00:15:43.760 }, 00:15:43.760 { 00:15:43.760 "name": "BaseBdev3", 00:15:43.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.760 "is_configured": false, 00:15:43.760 "data_offset": 0, 00:15:43.760 "data_size": 0 00:15:43.760 } 00:15:43.760 ] 00:15:43.760 }' 00:15:43.760 13:01:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.760 13:01:02 -- common/autotest_common.sh@10 -- # set +x 00:15:44.696 13:01:03 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:44.696 [2024-06-11 13:01:03.473726] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.696 [2024-06-11 13:01:03.473780] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:44.696 13:01:03 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:44.955 [2024-06-11 13:01:03.665782] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.955 [2024-06-11 13:01:03.665874] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.955 [2024-06-11 13:01:03.665900] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.955 [2024-06-11 13:01:03.665917] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.955 [2024-06-11 13:01:03.665923] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:44.955 [2024-06-11 13:01:03.665953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:44.955 13:01:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:45.214 [2024-06-11 13:01:03.892929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.214 BaseBdev1 00:15:45.214 13:01:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:45.214 13:01:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:45.214 13:01:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:45.214 13:01:03 -- common/autotest_common.sh@889 -- # local i 00:15:45.214 13:01:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:45.214 13:01:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:45.214 13:01:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:45.472 13:01:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.731 [ 00:15:45.731 { 00:15:45.731 "name": "BaseBdev1", 00:15:45.731 "aliases": [ 00:15:45.731 "52ea06c2-51c4-4e81-b74c-63b564882157" 00:15:45.731 ], 00:15:45.731 "product_name": "Malloc disk", 00:15:45.731 "block_size": 512, 00:15:45.731 "num_blocks": 65536, 00:15:45.731 "uuid": "52ea06c2-51c4-4e81-b74c-63b564882157", 00:15:45.731 "assigned_rate_limits": { 00:15:45.731 "rw_ios_per_sec": 0, 00:15:45.731 "rw_mbytes_per_sec": 0, 00:15:45.731 "r_mbytes_per_sec": 0, 00:15:45.731 "w_mbytes_per_sec": 0 
00:15:45.731 }, 00:15:45.731 "claimed": true, 00:15:45.731 "claim_type": "exclusive_write", 00:15:45.731 "zoned": false, 00:15:45.731 "supported_io_types": { 00:15:45.731 "read": true, 00:15:45.731 "write": true, 00:15:45.731 "unmap": true, 00:15:45.731 "write_zeroes": true, 00:15:45.731 "flush": true, 00:15:45.731 "reset": true, 00:15:45.731 "compare": false, 00:15:45.731 "compare_and_write": false, 00:15:45.731 "abort": true, 00:15:45.731 "nvme_admin": false, 00:15:45.731 "nvme_io": false 00:15:45.731 }, 00:15:45.731 "memory_domains": [ 00:15:45.731 { 00:15:45.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.731 "dma_device_type": 2 00:15:45.731 } 00:15:45.731 ], 00:15:45.731 "driver_specific": {} 00:15:45.731 } 00:15:45.731 ] 00:15:45.731 13:01:04 -- common/autotest_common.sh@895 -- # return 0 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.731 13:01:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.990 13:01:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.990 "name": "Existed_Raid", 00:15:45.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.990 "strip_size_kb": 64, 00:15:45.990 "state": "configuring", 00:15:45.990 "raid_level": "raid0", 00:15:45.990 "superblock": false, 00:15:45.990 "num_base_bdevs": 3, 00:15:45.990 "num_base_bdevs_discovered": 1, 00:15:45.990 "num_base_bdevs_operational": 3, 00:15:45.990 "base_bdevs_list": [ 00:15:45.990 { 00:15:45.990 "name": "BaseBdev1", 00:15:45.990 "uuid": "52ea06c2-51c4-4e81-b74c-63b564882157", 00:15:45.990 "is_configured": true, 00:15:45.990 "data_offset": 0, 00:15:45.990 "data_size": 65536 00:15:45.990 }, 00:15:45.990 { 00:15:45.990 "name": "BaseBdev2", 00:15:45.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.990 "is_configured": false, 00:15:45.990 "data_offset": 0, 00:15:45.990 "data_size": 0 00:15:45.990 }, 00:15:45.990 { 00:15:45.990 "name": "BaseBdev3", 00:15:45.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.990 "is_configured": false, 00:15:45.990 "data_offset": 0, 00:15:45.990 "data_size": 0 00:15:45.990 } 00:15:45.990 ] 00:15:45.990 }' 00:15:45.990 13:01:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.990 13:01:04 -- common/autotest_common.sh@10 -- # set +x 00:15:46.563 13:01:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:46.822 [2024-06-11 13:01:05.481348] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.822 [2024-06-11 13:01:05.481404] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:15:46.822 13:01:05 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:46.822 13:01:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:47.081 [2024-06-11 13:01:05.717419] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.081 [2024-06-11 13:01:05.719107] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.081 [2024-06-11 13:01:05.719177] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.081 [2024-06-11 13:01:05.719203] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:47.081 [2024-06-11 13:01:05.719226] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.081 13:01:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.340 13:01:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.340 "name": "Existed_Raid", 00:15:47.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.340 "strip_size_kb": 64, 00:15:47.340 "state": "configuring", 00:15:47.340 "raid_level": "raid0", 00:15:47.340 "superblock": false, 00:15:47.340 "num_base_bdevs": 3, 00:15:47.340 "num_base_bdevs_discovered": 1, 00:15:47.340 "num_base_bdevs_operational": 3, 00:15:47.340 "base_bdevs_list": [ 00:15:47.340 { 00:15:47.340 "name": "BaseBdev1", 00:15:47.340 "uuid": "52ea06c2-51c4-4e81-b74c-63b564882157", 00:15:47.340 "is_configured": true, 00:15:47.340 "data_offset": 0, 00:15:47.340 "data_size": 65536 00:15:47.340 }, 00:15:47.340 { 00:15:47.340 "name": "BaseBdev2", 00:15:47.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.340 "is_configured": false, 00:15:47.340 "data_offset": 0, 00:15:47.340 "data_size": 0 00:15:47.340 }, 00:15:47.340 { 00:15:47.340 "name": "BaseBdev3", 00:15:47.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.340 "is_configured": false, 00:15:47.340 "data_offset": 0, 00:15:47.340 "data_size": 0 00:15:47.340 } 00:15:47.340 ] 00:15:47.340 }' 00:15:47.340 13:01:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.340 13:01:05 -- common/autotest_common.sh@10 -- # set +x 00:15:47.906 13:01:06 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:48.164 [2024-06-11 13:01:06.852426] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.164 BaseBdev2 00:15:48.164 13:01:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:48.164 13:01:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:48.164 13:01:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:48.164 13:01:06 -- common/autotest_common.sh@889 -- # local i 00:15:48.164 13:01:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:48.164 13:01:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:48.164 13:01:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:48.423 13:01:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:48.423 [ 00:15:48.423 { 00:15:48.423 "name": "BaseBdev2", 00:15:48.423 "aliases": [ 00:15:48.423 "92c317e8-b07c-4dd7-9d1f-28e5e1012232" 00:15:48.423 ], 00:15:48.423 "product_name": "Malloc disk", 00:15:48.423 "block_size": 512, 00:15:48.423 "num_blocks": 65536, 00:15:48.423 "uuid": "92c317e8-b07c-4dd7-9d1f-28e5e1012232", 00:15:48.423 "assigned_rate_limits": { 00:15:48.423 "rw_ios_per_sec": 0, 00:15:48.423 "rw_mbytes_per_sec": 0, 00:15:48.423 "r_mbytes_per_sec": 0, 00:15:48.423 "w_mbytes_per_sec": 0 00:15:48.423 }, 00:15:48.423 "claimed": true, 00:15:48.423 "claim_type": "exclusive_write", 00:15:48.423 "zoned": false, 00:15:48.423 "supported_io_types": { 00:15:48.423 "read": true, 00:15:48.423 "write": true, 00:15:48.423 "unmap": true, 00:15:48.423 "write_zeroes": true, 00:15:48.423 "flush": true, 00:15:48.423 "reset": true, 00:15:48.423 "compare": false, 00:15:48.423 "compare_and_write": false, 00:15:48.423 "abort": true, 00:15:48.423 "nvme_admin": false, 00:15:48.423 "nvme_io": false 00:15:48.423 }, 00:15:48.423 "memory_domains": [ 00:15:48.423 { 00:15:48.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.423 "dma_device_type": 2 00:15:48.423 } 00:15:48.423 ], 00:15:48.423 "driver_specific": {} 00:15:48.423 } 00:15:48.423 ] 00:15:48.423 13:01:07 -- common/autotest_common.sh@895 -- # return 0 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.423 13:01:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:48.682 13:01:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:48.682 "name": "Existed_Raid", 00:15:48.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.682 "strip_size_kb": 64, 00:15:48.682 "state": "configuring", 00:15:48.682 "raid_level": "raid0", 00:15:48.682 "superblock": false, 00:15:48.682 "num_base_bdevs": 3, 00:15:48.682 "num_base_bdevs_discovered": 2, 00:15:48.682 "num_base_bdevs_operational": 3, 00:15:48.682 "base_bdevs_list": [ 00:15:48.682 { 00:15:48.682 "name": "BaseBdev1", 00:15:48.682 "uuid": "52ea06c2-51c4-4e81-b74c-63b564882157", 00:15:48.682 "is_configured": true, 00:15:48.682 "data_offset": 0, 00:15:48.682 "data_size": 65536 00:15:48.682 }, 00:15:48.682 { 00:15:48.682 "name": "BaseBdev2", 00:15:48.682 "uuid": "92c317e8-b07c-4dd7-9d1f-28e5e1012232", 00:15:48.682 "is_configured": true, 00:15:48.682 "data_offset": 0, 00:15:48.682 "data_size": 65536 00:15:48.682 }, 00:15:48.682 { 00:15:48.682 "name": "BaseBdev3", 00:15:48.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.682 "is_configured": false, 00:15:48.682 "data_offset": 0, 00:15:48.682 "data_size": 0 00:15:48.682 } 00:15:48.682 ] 00:15:48.682 }' 00:15:48.682 13:01:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:48.682 13:01:07 -- common/autotest_common.sh@10 -- # set +x 00:15:49.617 13:01:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:49.617 [2024-06-11 13:01:08.298410] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.617 [2024-06-11 13:01:08.298475] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:49.617 [2024-06-11 13:01:08.298484] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:49.617 [2024-06-11 13:01:08.298600] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:49.617 [2024-06-11 13:01:08.298967] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:49.617 [2024-06-11 13:01:08.298991] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:49.617 [2024-06-11 13:01:08.299243] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.617 BaseBdev3 00:15:49.617 13:01:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:49.617 13:01:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:49.617 13:01:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:49.617 13:01:08 -- common/autotest_common.sh@889 -- # local i 00:15:49.617 13:01:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:49.617 13:01:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:49.617 13:01:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.875 13:01:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:50.134 [ 00:15:50.134 { 00:15:50.134 "name": "BaseBdev3", 00:15:50.134 "aliases": [ 00:15:50.134 "f78150d4-4a6d-459c-9942-3d7a6a42c47c" 00:15:50.134 ], 00:15:50.134 "product_name": "Malloc disk", 00:15:50.134 "block_size": 512, 00:15:50.134 "num_blocks": 65536, 00:15:50.134 "uuid": "f78150d4-4a6d-459c-9942-3d7a6a42c47c", 00:15:50.134 "assigned_rate_limits": { 00:15:50.134 
"rw_ios_per_sec": 0, 00:15:50.134 "rw_mbytes_per_sec": 0, 00:15:50.134 "r_mbytes_per_sec": 0, 00:15:50.134 "w_mbytes_per_sec": 0 00:15:50.134 }, 00:15:50.134 "claimed": true, 00:15:50.134 "claim_type": "exclusive_write", 00:15:50.134 "zoned": false, 00:15:50.134 "supported_io_types": { 00:15:50.134 "read": true, 00:15:50.134 "write": true, 00:15:50.134 "unmap": true, 00:15:50.134 "write_zeroes": true, 00:15:50.134 "flush": true, 00:15:50.134 "reset": true, 00:15:50.134 "compare": false, 00:15:50.134 "compare_and_write": false, 00:15:50.134 "abort": true, 00:15:50.134 "nvme_admin": false, 00:15:50.134 "nvme_io": false 00:15:50.134 }, 00:15:50.134 "memory_domains": [ 00:15:50.134 { 00:15:50.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.134 "dma_device_type": 2 00:15:50.134 } 00:15:50.134 ], 00:15:50.134 "driver_specific": {} 00:15:50.134 } 00:15:50.134 ] 00:15:50.134 13:01:08 -- common/autotest_common.sh@895 -- # return 0 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.134 13:01:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.392 13:01:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:50.392 "name": "Existed_Raid", 00:15:50.392 "uuid": "fcc5264a-a0f6-482d-95a6-080c1d000f97", 00:15:50.392 "strip_size_kb": 64, 00:15:50.392 "state": "online", 00:15:50.392 "raid_level": "raid0", 00:15:50.392 "superblock": false, 00:15:50.392 "num_base_bdevs": 3, 00:15:50.392 "num_base_bdevs_discovered": 3, 00:15:50.392 "num_base_bdevs_operational": 3, 00:15:50.392 "base_bdevs_list": [ 00:15:50.392 { 00:15:50.392 "name": "BaseBdev1", 00:15:50.392 "uuid": "52ea06c2-51c4-4e81-b74c-63b564882157", 00:15:50.392 "is_configured": true, 00:15:50.392 "data_offset": 0, 00:15:50.392 "data_size": 65536 00:15:50.392 }, 00:15:50.392 { 00:15:50.392 "name": "BaseBdev2", 00:15:50.392 "uuid": "92c317e8-b07c-4dd7-9d1f-28e5e1012232", 00:15:50.392 "is_configured": true, 00:15:50.392 "data_offset": 0, 00:15:50.392 "data_size": 65536 00:15:50.392 }, 00:15:50.392 { 00:15:50.392 "name": "BaseBdev3", 00:15:50.392 "uuid": "f78150d4-4a6d-459c-9942-3d7a6a42c47c", 00:15:50.392 "is_configured": true, 00:15:50.392 "data_offset": 0, 00:15:50.392 "data_size": 65536 00:15:50.392 } 00:15:50.392 ] 00:15:50.392 }' 00:15:50.392 13:01:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:50.392 13:01:09 -- common/autotest_common.sh@10 -- # set +x 00:15:50.959 13:01:09 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:51.218 [2024-06-11 13:01:09.926908] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.218 [2024-06-11 13:01:09.926941] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.218 [2024-06-11 13:01:09.927021] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.218 13:01:09 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.218 13:01:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.476 13:01:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.476 "name": "Existed_Raid", 00:15:51.476 "uuid": "fcc5264a-a0f6-482d-95a6-080c1d000f97", 00:15:51.476 "strip_size_kb": 64, 00:15:51.476 "state": "offline", 00:15:51.476 "raid_level": "raid0", 00:15:51.476 "superblock": false, 00:15:51.476 "num_base_bdevs": 3, 00:15:51.476 "num_base_bdevs_discovered": 2, 00:15:51.476 "num_base_bdevs_operational": 2, 00:15:51.476 "base_bdevs_list": [ 00:15:51.476 { 00:15:51.476 "name": null, 00:15:51.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.476 "is_configured": false, 00:15:51.476 "data_offset": 0, 00:15:51.476 "data_size": 65536 00:15:51.476 }, 00:15:51.476 { 00:15:51.476 "name": "BaseBdev2", 00:15:51.476 "uuid": "92c317e8-b07c-4dd7-9d1f-28e5e1012232", 00:15:51.476 "is_configured": true, 00:15:51.476 "data_offset": 0, 00:15:51.476 "data_size": 65536 00:15:51.476 }, 00:15:51.476 { 00:15:51.476 "name": "BaseBdev3", 00:15:51.476 "uuid": "f78150d4-4a6d-459c-9942-3d7a6a42c47c", 00:15:51.476 "is_configured": true, 00:15:51.476 "data_offset": 0, 00:15:51.476 "data_size": 65536 00:15:51.476 } 00:15:51.476 ] 00:15:51.476 }' 00:15:51.476 13:01:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.476 13:01:10 -- common/autotest_common.sh@10 -- # set +x 00:15:52.411 13:01:10 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:52.411 13:01:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:52.411 13:01:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.411 13:01:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:52.411 13:01:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:52.411 13:01:11 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:15:52.411 13:01:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:52.729 [2024-06-11 13:01:11.421865] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.729 13:01:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:52.729 13:01:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:52.729 13:01:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.729 13:01:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:52.987 13:01:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:52.987 13:01:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:52.987 13:01:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:53.245 [2024-06-11 13:01:11.975646] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:53.245 [2024-06-11 13:01:11.975709] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:53.245 13:01:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:53.245 13:01:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:53.245 13:01:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.245 13:01:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:53.503 13:01:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:53.503 13:01:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:53.503 13:01:12 -- bdev/bdev_raid.sh@287 -- # killprocess 117700 00:15:53.503 13:01:12 -- common/autotest_common.sh@926 -- # '[' -z 117700 ']' 00:15:53.503 13:01:12 -- common/autotest_common.sh@930 -- # kill -0 117700 00:15:53.503 13:01:12 -- common/autotest_common.sh@931 -- # uname 00:15:53.503 13:01:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:53.503 13:01:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117700 00:15:53.503 killing process with pid 117700 00:15:53.503 13:01:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:53.503 13:01:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:53.503 13:01:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117700' 00:15:53.503 13:01:12 -- common/autotest_common.sh@945 -- # kill 117700 00:15:53.503 13:01:12 -- common/autotest_common.sh@950 -- # wait 117700 00:15:53.503 [2024-06-11 13:01:12.263424] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.503 [2024-06-11 13:01:12.263566] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.437 ************************************ 00:15:54.437 END TEST raid_state_function_test 00:15:54.437 ************************************ 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:54.437 00:15:54.437 real 0m12.107s 00:15:54.437 user 0m21.809s 00:15:54.437 sys 0m1.284s 00:15:54.437 13:01:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:54.437 13:01:13 -- common/autotest_common.sh@10 -- # set +x 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:54.437 13:01:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:54.437 13:01:13 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:54.437 13:01:13 -- common/autotest_common.sh@10 -- # set +x 00:15:54.437 ************************************ 00:15:54.437 START TEST raid_state_function_test_sb 00:15:54.437 ************************************ 00:15:54.437 13:01:13 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=118097 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118097' 00:15:54.437 Process raid pid: 118097 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118097 /var/tmp/spdk-raid.sock 00:15:54.437 13:01:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:54.437 13:01:13 -- common/autotest_common.sh@819 -- # '[' -z 118097 ']' 00:15:54.437 13:01:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:54.437 13:01:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:54.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:54.437 13:01:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:54.437 13:01:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:54.437 13:01:13 -- common/autotest_common.sh@10 -- # set +x 00:15:54.695 [2024-06-11 13:01:13.325186] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
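For readers following the trace, the raid_state_function_test_sb run below boils down to a short sequence of rpc.py calls against the freshly started bdev_svc app. The following is a condensed sketch, not part of the CI log itself: it uses only the socket path, flags, and commands that appear in this trace; the backgrounding with "&" and the "$!"/"$rpc" shorthand are simplifications (the real harness tracks the pid explicitly and uses the waitforlisten helper from autotest_common.sh, as seen above).

# Start the standalone bdev service with raid debug logging on a private RPC socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
waitforlisten $! /var/tmp/spdk-raid.sock

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# raid0 with superblock (-s), strip size 64 KiB, over three base bdevs that do not
# exist yet: the raid bdev is registered but stays in the "configuring" state
$rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Create the 32 MiB / 512-byte-block malloc base bdevs; once the last one is
# claimed the raid transitions to "online"
$rpc bdev_malloc_create 32 512 -b BaseBdev1
$rpc bdev_malloc_create 32 512 -b BaseBdev2
$rpc bdev_malloc_create 32 512 -b BaseBdev3

# Inspect the raid state, as the verify_raid_bdev_state helper does
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# raid0 has no redundancy, so deleting any base bdev drops the array to "offline"
$rpc bdev_malloc_delete BaseBdev1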
00:15:54.695 [2024-06-11 13:01:13.325378] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.695 [2024-06-11 13:01:13.494074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.953 [2024-06-11 13:01:13.698038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.212 [2024-06-11 13:01:13.874606] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.470 13:01:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:55.470 13:01:14 -- common/autotest_common.sh@852 -- # return 0 00:15:55.470 13:01:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:55.728 [2024-06-11 13:01:14.444180] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.728 [2024-06-11 13:01:14.444242] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.728 [2024-06-11 13:01:14.444271] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.728 [2024-06-11 13:01:14.444290] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.728 [2024-06-11 13:01:14.444297] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:55.728 [2024-06-11 13:01:14.444336] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.728 13:01:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.985 13:01:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.985 "name": "Existed_Raid", 00:15:55.985 "uuid": "56e49408-18cc-48b3-a4ac-2a1db2458c34", 00:15:55.985 "strip_size_kb": 64, 00:15:55.985 "state": "configuring", 00:15:55.985 "raid_level": "raid0", 00:15:55.985 "superblock": true, 00:15:55.985 "num_base_bdevs": 3, 00:15:55.985 "num_base_bdevs_discovered": 0, 00:15:55.985 "num_base_bdevs_operational": 3, 00:15:55.985 "base_bdevs_list": [ 00:15:55.985 { 00:15:55.985 "name": "BaseBdev1", 00:15:55.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.985 "is_configured": false, 00:15:55.985 "data_offset": 0, 00:15:55.985 "data_size": 0 00:15:55.985 }, 00:15:55.985 { 00:15:55.985 "name": "BaseBdev2", 00:15:55.985 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:55.985 "is_configured": false, 00:15:55.985 "data_offset": 0, 00:15:55.985 "data_size": 0 00:15:55.985 }, 00:15:55.985 { 00:15:55.985 "name": "BaseBdev3", 00:15:55.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.985 "is_configured": false, 00:15:55.985 "data_offset": 0, 00:15:55.985 "data_size": 0 00:15:55.985 } 00:15:55.985 ] 00:15:55.985 }' 00:15:55.985 13:01:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.985 13:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:56.551 13:01:15 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:56.809 [2024-06-11 13:01:15.524296] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:56.809 [2024-06-11 13:01:15.524350] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:56.809 13:01:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:57.067 [2024-06-11 13:01:15.716374] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.067 [2024-06-11 13:01:15.716446] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.067 [2024-06-11 13:01:15.716458] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.067 [2024-06-11 13:01:15.716514] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.067 [2024-06-11 13:01:15.716522] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:57.067 [2024-06-11 13:01:15.716554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:57.067 13:01:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:57.325 [2024-06-11 13:01:15.934773] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.325 BaseBdev1 00:15:57.325 13:01:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:57.325 13:01:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:57.325 13:01:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:57.325 13:01:15 -- common/autotest_common.sh@889 -- # local i 00:15:57.325 13:01:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:57.325 13:01:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:57.325 13:01:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:57.583 13:01:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:57.583 [ 00:15:57.583 { 00:15:57.583 "name": "BaseBdev1", 00:15:57.583 "aliases": [ 00:15:57.583 "5db3c14e-c469-4a14-b809-0464a2725328" 00:15:57.583 ], 00:15:57.583 "product_name": "Malloc disk", 00:15:57.583 "block_size": 512, 00:15:57.583 "num_blocks": 65536, 00:15:57.583 "uuid": "5db3c14e-c469-4a14-b809-0464a2725328", 00:15:57.583 "assigned_rate_limits": { 00:15:57.583 "rw_ios_per_sec": 0, 00:15:57.583 "rw_mbytes_per_sec": 0, 00:15:57.583 "r_mbytes_per_sec": 0, 00:15:57.583 
"w_mbytes_per_sec": 0 00:15:57.583 }, 00:15:57.583 "claimed": true, 00:15:57.583 "claim_type": "exclusive_write", 00:15:57.583 "zoned": false, 00:15:57.583 "supported_io_types": { 00:15:57.583 "read": true, 00:15:57.583 "write": true, 00:15:57.583 "unmap": true, 00:15:57.583 "write_zeroes": true, 00:15:57.583 "flush": true, 00:15:57.583 "reset": true, 00:15:57.583 "compare": false, 00:15:57.583 "compare_and_write": false, 00:15:57.583 "abort": true, 00:15:57.583 "nvme_admin": false, 00:15:57.583 "nvme_io": false 00:15:57.583 }, 00:15:57.583 "memory_domains": [ 00:15:57.583 { 00:15:57.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.583 "dma_device_type": 2 00:15:57.583 } 00:15:57.583 ], 00:15:57.583 "driver_specific": {} 00:15:57.583 } 00:15:57.583 ] 00:15:57.583 13:01:16 -- common/autotest_common.sh@895 -- # return 0 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.583 13:01:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.842 13:01:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.842 "name": "Existed_Raid", 00:15:57.842 "uuid": "dac8681c-0a4e-4832-b9ed-1dad31231f50", 00:15:57.842 "strip_size_kb": 64, 00:15:57.842 "state": "configuring", 00:15:57.842 "raid_level": "raid0", 00:15:57.842 "superblock": true, 00:15:57.842 "num_base_bdevs": 3, 00:15:57.842 "num_base_bdevs_discovered": 1, 00:15:57.842 "num_base_bdevs_operational": 3, 00:15:57.842 "base_bdevs_list": [ 00:15:57.842 { 00:15:57.842 "name": "BaseBdev1", 00:15:57.842 "uuid": "5db3c14e-c469-4a14-b809-0464a2725328", 00:15:57.842 "is_configured": true, 00:15:57.842 "data_offset": 2048, 00:15:57.842 "data_size": 63488 00:15:57.842 }, 00:15:57.842 { 00:15:57.842 "name": "BaseBdev2", 00:15:57.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.842 "is_configured": false, 00:15:57.842 "data_offset": 0, 00:15:57.842 "data_size": 0 00:15:57.842 }, 00:15:57.842 { 00:15:57.842 "name": "BaseBdev3", 00:15:57.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.842 "is_configured": false, 00:15:57.842 "data_offset": 0, 00:15:57.842 "data_size": 0 00:15:57.842 } 00:15:57.842 ] 00:15:57.842 }' 00:15:57.842 13:01:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.842 13:01:16 -- common/autotest_common.sh@10 -- # set +x 00:15:58.408 13:01:17 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:58.667 [2024-06-11 13:01:17.407151] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.667 [2024-06-11 13:01:17.407223] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:58.667 13:01:17 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:58.667 13:01:17 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:58.925 13:01:17 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:59.183 BaseBdev1 00:15:59.183 13:01:17 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:59.183 13:01:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:59.183 13:01:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:59.183 13:01:17 -- common/autotest_common.sh@889 -- # local i 00:15:59.183 13:01:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:59.183 13:01:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:59.183 13:01:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:59.441 13:01:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:59.699 [ 00:15:59.699 { 00:15:59.699 "name": "BaseBdev1", 00:15:59.699 "aliases": [ 00:15:59.699 "019d72a1-e4db-4306-ac13-b66a31ca8687" 00:15:59.699 ], 00:15:59.699 "product_name": "Malloc disk", 00:15:59.699 "block_size": 512, 00:15:59.699 "num_blocks": 65536, 00:15:59.699 "uuid": "019d72a1-e4db-4306-ac13-b66a31ca8687", 00:15:59.699 "assigned_rate_limits": { 00:15:59.699 "rw_ios_per_sec": 0, 00:15:59.699 "rw_mbytes_per_sec": 0, 00:15:59.699 "r_mbytes_per_sec": 0, 00:15:59.699 "w_mbytes_per_sec": 0 00:15:59.699 }, 00:15:59.699 "claimed": false, 00:15:59.699 "zoned": false, 00:15:59.699 "supported_io_types": { 00:15:59.699 "read": true, 00:15:59.699 "write": true, 00:15:59.699 "unmap": true, 00:15:59.699 "write_zeroes": true, 00:15:59.699 "flush": true, 00:15:59.699 "reset": true, 00:15:59.699 "compare": false, 00:15:59.699 "compare_and_write": false, 00:15:59.699 "abort": true, 00:15:59.699 "nvme_admin": false, 00:15:59.699 "nvme_io": false 00:15:59.699 }, 00:15:59.699 "memory_domains": [ 00:15:59.699 { 00:15:59.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.699 "dma_device_type": 2 00:15:59.699 } 00:15:59.699 ], 00:15:59.699 "driver_specific": {} 00:15:59.699 } 00:15:59.699 ] 00:15:59.699 13:01:18 -- common/autotest_common.sh@895 -- # return 0 00:15:59.699 13:01:18 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:59.699 [2024-06-11 13:01:18.528330] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.699 [2024-06-11 13:01:18.530337] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.699 [2024-06-11 13:01:18.530413] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.699 [2024-06-11 13:01:18.530441] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:59.699 [2024-06-11 13:01:18.530464] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:59.958 
13:01:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.958 13:01:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.216 13:01:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.216 "name": "Existed_Raid", 00:16:00.216 "uuid": "bb089202-8681-4eed-bece-7650948999ed", 00:16:00.216 "strip_size_kb": 64, 00:16:00.216 "state": "configuring", 00:16:00.216 "raid_level": "raid0", 00:16:00.216 "superblock": true, 00:16:00.216 "num_base_bdevs": 3, 00:16:00.216 "num_base_bdevs_discovered": 1, 00:16:00.216 "num_base_bdevs_operational": 3, 00:16:00.216 "base_bdevs_list": [ 00:16:00.216 { 00:16:00.216 "name": "BaseBdev1", 00:16:00.216 "uuid": "019d72a1-e4db-4306-ac13-b66a31ca8687", 00:16:00.216 "is_configured": true, 00:16:00.216 "data_offset": 2048, 00:16:00.216 "data_size": 63488 00:16:00.216 }, 00:16:00.216 { 00:16:00.216 "name": "BaseBdev2", 00:16:00.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.216 "is_configured": false, 00:16:00.216 "data_offset": 0, 00:16:00.216 "data_size": 0 00:16:00.216 }, 00:16:00.216 { 00:16:00.216 "name": "BaseBdev3", 00:16:00.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.216 "is_configured": false, 00:16:00.216 "data_offset": 0, 00:16:00.216 "data_size": 0 00:16:00.216 } 00:16:00.216 ] 00:16:00.216 }' 00:16:00.216 13:01:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.216 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:16:00.781 13:01:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:01.039 [2024-06-11 13:01:19.707293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.039 BaseBdev2 00:16:01.039 13:01:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:01.039 13:01:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:01.039 13:01:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:01.039 13:01:19 -- common/autotest_common.sh@889 -- # local i 00:16:01.039 13:01:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:01.039 13:01:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:01.039 13:01:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.296 13:01:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.555 [ 00:16:01.555 { 00:16:01.555 "name": "BaseBdev2", 00:16:01.555 "aliases": [ 00:16:01.555 
"d33a43a6-c025-41cd-b56e-7b0b287e10b8" 00:16:01.555 ], 00:16:01.555 "product_name": "Malloc disk", 00:16:01.555 "block_size": 512, 00:16:01.555 "num_blocks": 65536, 00:16:01.555 "uuid": "d33a43a6-c025-41cd-b56e-7b0b287e10b8", 00:16:01.555 "assigned_rate_limits": { 00:16:01.555 "rw_ios_per_sec": 0, 00:16:01.555 "rw_mbytes_per_sec": 0, 00:16:01.555 "r_mbytes_per_sec": 0, 00:16:01.555 "w_mbytes_per_sec": 0 00:16:01.555 }, 00:16:01.555 "claimed": true, 00:16:01.555 "claim_type": "exclusive_write", 00:16:01.555 "zoned": false, 00:16:01.555 "supported_io_types": { 00:16:01.555 "read": true, 00:16:01.555 "write": true, 00:16:01.555 "unmap": true, 00:16:01.555 "write_zeroes": true, 00:16:01.555 "flush": true, 00:16:01.555 "reset": true, 00:16:01.555 "compare": false, 00:16:01.555 "compare_and_write": false, 00:16:01.555 "abort": true, 00:16:01.555 "nvme_admin": false, 00:16:01.555 "nvme_io": false 00:16:01.555 }, 00:16:01.555 "memory_domains": [ 00:16:01.555 { 00:16:01.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.555 "dma_device_type": 2 00:16:01.555 } 00:16:01.555 ], 00:16:01.555 "driver_specific": {} 00:16:01.555 } 00:16:01.555 ] 00:16:01.555 13:01:20 -- common/autotest_common.sh@895 -- # return 0 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.555 13:01:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.813 13:01:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.813 "name": "Existed_Raid", 00:16:01.813 "uuid": "bb089202-8681-4eed-bece-7650948999ed", 00:16:01.813 "strip_size_kb": 64, 00:16:01.813 "state": "configuring", 00:16:01.813 "raid_level": "raid0", 00:16:01.813 "superblock": true, 00:16:01.813 "num_base_bdevs": 3, 00:16:01.813 "num_base_bdevs_discovered": 2, 00:16:01.813 "num_base_bdevs_operational": 3, 00:16:01.813 "base_bdevs_list": [ 00:16:01.813 { 00:16:01.813 "name": "BaseBdev1", 00:16:01.813 "uuid": "019d72a1-e4db-4306-ac13-b66a31ca8687", 00:16:01.813 "is_configured": true, 00:16:01.813 "data_offset": 2048, 00:16:01.813 "data_size": 63488 00:16:01.813 }, 00:16:01.813 { 00:16:01.813 "name": "BaseBdev2", 00:16:01.813 "uuid": "d33a43a6-c025-41cd-b56e-7b0b287e10b8", 00:16:01.813 "is_configured": true, 00:16:01.813 "data_offset": 2048, 00:16:01.813 "data_size": 63488 00:16:01.813 }, 00:16:01.813 { 00:16:01.813 "name": "BaseBdev3", 00:16:01.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.813 "is_configured": false, 00:16:01.813 "data_offset": 0, 00:16:01.813 "data_size": 0 00:16:01.813 
} 00:16:01.813 ] 00:16:01.813 }' 00:16:01.813 13:01:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.813 13:01:20 -- common/autotest_common.sh@10 -- # set +x 00:16:02.379 13:01:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:02.636 [2024-06-11 13:01:21.347170] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.636 [2024-06-11 13:01:21.347366] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:02.636 [2024-06-11 13:01:21.347380] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:02.636 [2024-06-11 13:01:21.347551] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:02.636 [2024-06-11 13:01:21.347901] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:02.636 [2024-06-11 13:01:21.347921] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:02.636 [2024-06-11 13:01:21.348094] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.636 BaseBdev3 00:16:02.636 13:01:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:02.636 13:01:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:02.636 13:01:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:02.636 13:01:21 -- common/autotest_common.sh@889 -- # local i 00:16:02.636 13:01:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:02.636 13:01:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:02.636 13:01:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.893 13:01:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:03.151 [ 00:16:03.151 { 00:16:03.151 "name": "BaseBdev3", 00:16:03.151 "aliases": [ 00:16:03.151 "d2fd59fc-a1f2-421c-8eae-04f7af1eae86" 00:16:03.151 ], 00:16:03.151 "product_name": "Malloc disk", 00:16:03.151 "block_size": 512, 00:16:03.151 "num_blocks": 65536, 00:16:03.151 "uuid": "d2fd59fc-a1f2-421c-8eae-04f7af1eae86", 00:16:03.151 "assigned_rate_limits": { 00:16:03.151 "rw_ios_per_sec": 0, 00:16:03.151 "rw_mbytes_per_sec": 0, 00:16:03.151 "r_mbytes_per_sec": 0, 00:16:03.151 "w_mbytes_per_sec": 0 00:16:03.151 }, 00:16:03.151 "claimed": true, 00:16:03.151 "claim_type": "exclusive_write", 00:16:03.151 "zoned": false, 00:16:03.151 "supported_io_types": { 00:16:03.151 "read": true, 00:16:03.151 "write": true, 00:16:03.151 "unmap": true, 00:16:03.151 "write_zeroes": true, 00:16:03.151 "flush": true, 00:16:03.151 "reset": true, 00:16:03.151 "compare": false, 00:16:03.151 "compare_and_write": false, 00:16:03.151 "abort": true, 00:16:03.151 "nvme_admin": false, 00:16:03.151 "nvme_io": false 00:16:03.151 }, 00:16:03.151 "memory_domains": [ 00:16:03.151 { 00:16:03.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.151 "dma_device_type": 2 00:16:03.151 } 00:16:03.151 ], 00:16:03.151 "driver_specific": {} 00:16:03.151 } 00:16:03.151 ] 00:16:03.151 13:01:21 -- common/autotest_common.sh@895 -- # return 0 00:16:03.151 13:01:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:03.151 13:01:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:03.151 13:01:21 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:03.151 13:01:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.152 13:01:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:03.152 13:01:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:03.152 13:01:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:03.152 13:01:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:03.152 13:01:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.152 13:01:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.152 13:01:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.152 13:01:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.152 13:01:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.152 13:01:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.410 13:01:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.410 "name": "Existed_Raid", 00:16:03.410 "uuid": "bb089202-8681-4eed-bece-7650948999ed", 00:16:03.410 "strip_size_kb": 64, 00:16:03.410 "state": "online", 00:16:03.410 "raid_level": "raid0", 00:16:03.410 "superblock": true, 00:16:03.410 "num_base_bdevs": 3, 00:16:03.410 "num_base_bdevs_discovered": 3, 00:16:03.410 "num_base_bdevs_operational": 3, 00:16:03.410 "base_bdevs_list": [ 00:16:03.410 { 00:16:03.410 "name": "BaseBdev1", 00:16:03.410 "uuid": "019d72a1-e4db-4306-ac13-b66a31ca8687", 00:16:03.410 "is_configured": true, 00:16:03.410 "data_offset": 2048, 00:16:03.410 "data_size": 63488 00:16:03.410 }, 00:16:03.410 { 00:16:03.410 "name": "BaseBdev2", 00:16:03.410 "uuid": "d33a43a6-c025-41cd-b56e-7b0b287e10b8", 00:16:03.410 "is_configured": true, 00:16:03.410 "data_offset": 2048, 00:16:03.410 "data_size": 63488 00:16:03.410 }, 00:16:03.410 { 00:16:03.410 "name": "BaseBdev3", 00:16:03.410 "uuid": "d2fd59fc-a1f2-421c-8eae-04f7af1eae86", 00:16:03.410 "is_configured": true, 00:16:03.410 "data_offset": 2048, 00:16:03.410 "data_size": 63488 00:16:03.410 } 00:16:03.410 ] 00:16:03.410 }' 00:16:03.410 13:01:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.410 13:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:03.976 13:01:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:04.234 [2024-06-11 13:01:22.903628] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.234 [2024-06-11 13:01:22.903661] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.234 [2024-06-11 13:01:22.903722] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.234 13:01:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.492 13:01:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.492 "name": "Existed_Raid", 00:16:04.492 "uuid": "bb089202-8681-4eed-bece-7650948999ed", 00:16:04.492 "strip_size_kb": 64, 00:16:04.492 "state": "offline", 00:16:04.492 "raid_level": "raid0", 00:16:04.492 "superblock": true, 00:16:04.492 "num_base_bdevs": 3, 00:16:04.492 "num_base_bdevs_discovered": 2, 00:16:04.492 "num_base_bdevs_operational": 2, 00:16:04.492 "base_bdevs_list": [ 00:16:04.492 { 00:16:04.492 "name": null, 00:16:04.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.492 "is_configured": false, 00:16:04.492 "data_offset": 2048, 00:16:04.492 "data_size": 63488 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "name": "BaseBdev2", 00:16:04.492 "uuid": "d33a43a6-c025-41cd-b56e-7b0b287e10b8", 00:16:04.492 "is_configured": true, 00:16:04.492 "data_offset": 2048, 00:16:04.492 "data_size": 63488 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "name": "BaseBdev3", 00:16:04.492 "uuid": "d2fd59fc-a1f2-421c-8eae-04f7af1eae86", 00:16:04.492 "is_configured": true, 00:16:04.492 "data_offset": 2048, 00:16:04.492 "data_size": 63488 00:16:04.492 } 00:16:04.492 ] 00:16:04.492 }' 00:16:04.492 13:01:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.492 13:01:23 -- common/autotest_common.sh@10 -- # set +x 00:16:05.084 13:01:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:05.084 13:01:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:05.084 13:01:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.084 13:01:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:05.342 13:01:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:05.342 13:01:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.342 13:01:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:05.601 [2024-06-11 13:01:24.344177] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:05.859 13:01:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:05.859 13:01:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:05.859 13:01:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.859 13:01:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:05.859 13:01:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:05.859 13:01:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.859 13:01:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:06.117 [2024-06-11 13:01:24.882782] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:06.117 [2024-06-11 
13:01:24.882857] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:06.375 13:01:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:06.375 13:01:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:06.375 13:01:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.375 13:01:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:06.375 13:01:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:06.375 13:01:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:06.375 13:01:25 -- bdev/bdev_raid.sh@287 -- # killprocess 118097 00:16:06.375 13:01:25 -- common/autotest_common.sh@926 -- # '[' -z 118097 ']' 00:16:06.375 13:01:25 -- common/autotest_common.sh@930 -- # kill -0 118097 00:16:06.375 13:01:25 -- common/autotest_common.sh@931 -- # uname 00:16:06.375 13:01:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:06.375 13:01:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118097 00:16:06.634 killing process with pid 118097 00:16:06.634 13:01:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:06.634 13:01:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:06.634 13:01:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118097' 00:16:06.634 13:01:25 -- common/autotest_common.sh@945 -- # kill 118097 00:16:06.634 13:01:25 -- common/autotest_common.sh@950 -- # wait 118097 00:16:06.634 [2024-06-11 13:01:25.218234] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.634 [2024-06-11 13:01:25.218362] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.570 ************************************ 00:16:07.570 END TEST raid_state_function_test_sb 00:16:07.570 ************************************ 00:16:07.570 13:01:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:07.570 00:16:07.570 real 0m12.911s 00:16:07.570 user 0m23.230s 00:16:07.570 sys 0m1.255s 00:16:07.570 13:01:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.570 13:01:26 -- common/autotest_common.sh@10 -- # set +x 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:16:07.571 13:01:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:07.571 13:01:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:07.571 13:01:26 -- common/autotest_common.sh@10 -- # set +x 00:16:07.571 ************************************ 00:16:07.571 START TEST raid_superblock_test 00:16:07.571 ************************************ 00:16:07.571 13:01:26 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:07.571 13:01:26 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@357 -- # raid_pid=118505 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@358 -- # waitforlisten 118505 /var/tmp/spdk-raid.sock 00:16:07.571 13:01:26 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:07.571 13:01:26 -- common/autotest_common.sh@819 -- # '[' -z 118505 ']' 00:16:07.571 13:01:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:07.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:07.571 13:01:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:07.571 13:01:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:07.571 13:01:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:07.571 13:01:26 -- common/autotest_common.sh@10 -- # set +x 00:16:07.571 [2024-06-11 13:01:26.282524] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:07.571 [2024-06-11 13:01:26.282692] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118505 ] 00:16:07.829 [2024-06-11 13:01:26.439428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.829 [2024-06-11 13:01:26.667623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.088 [2024-06-11 13:01:26.837730] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.346 13:01:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:08.346 13:01:27 -- common/autotest_common.sh@852 -- # return 0 00:16:08.346 13:01:27 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:08.346 13:01:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:08.346 13:01:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:08.346 13:01:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:08.346 13:01:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:08.346 13:01:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:08.346 13:01:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:08.346 13:01:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:08.346 13:01:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:08.604 malloc1 00:16:08.604 13:01:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:08.862 [2024-06-11 13:01:27.580954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:08.862 [2024-06-11 13:01:27.581734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.862 
[2024-06-11 13:01:27.581816] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:08.862 [2024-06-11 13:01:27.581966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.862 [2024-06-11 13:01:27.588181] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.862 [2024-06-11 13:01:27.588289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:08.862 pt1 00:16:08.862 13:01:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:08.862 13:01:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:08.862 13:01:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:08.862 13:01:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:08.862 13:01:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:08.862 13:01:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:08.862 13:01:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:08.862 13:01:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:08.862 13:01:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:09.120 malloc2 00:16:09.120 13:01:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:09.379 [2024-06-11 13:01:28.015913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:09.379 [2024-06-11 13:01:28.016000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.379 [2024-06-11 13:01:28.016041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:09.379 [2024-06-11 13:01:28.016091] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.379 [2024-06-11 13:01:28.018306] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.379 [2024-06-11 13:01:28.018368] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:09.379 pt2 00:16:09.379 13:01:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:09.379 13:01:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:09.379 13:01:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:09.379 13:01:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:09.379 13:01:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:09.379 13:01:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:09.379 13:01:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:09.379 13:01:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:09.379 13:01:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:09.638 malloc3 00:16:09.638 13:01:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:09.638 [2024-06-11 13:01:28.417975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:09.638 [2024-06-11 13:01:28.418058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.638 
[2024-06-11 13:01:28.418096] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:09.638 [2024-06-11 13:01:28.418134] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.638 [2024-06-11 13:01:28.420208] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.638 [2024-06-11 13:01:28.420259] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:09.638 pt3 00:16:09.638 13:01:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:09.638 13:01:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:09.638 13:01:28 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:09.897 [2024-06-11 13:01:28.610166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:09.897 [2024-06-11 13:01:28.612879] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:09.897 [2024-06-11 13:01:28.613031] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:09.897 [2024-06-11 13:01:28.613808] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:09.897 [2024-06-11 13:01:28.613845] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:09.897 [2024-06-11 13:01:28.614087] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:09.897 [2024-06-11 13:01:28.614910] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:09.897 [2024-06-11 13:01:28.614961] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:09.897 [2024-06-11 13:01:28.615423] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.897 13:01:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.155 13:01:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:10.155 "name": "raid_bdev1", 00:16:10.155 "uuid": "c1c2bd79-1b18-4fb4-8c41-7feb53d0f830", 00:16:10.155 "strip_size_kb": 64, 00:16:10.155 "state": "online", 00:16:10.155 "raid_level": "raid0", 00:16:10.155 "superblock": true, 00:16:10.155 "num_base_bdevs": 3, 00:16:10.155 "num_base_bdevs_discovered": 3, 00:16:10.155 "num_base_bdevs_operational": 3, 00:16:10.155 "base_bdevs_list": [ 00:16:10.155 { 00:16:10.155 "name": "pt1", 00:16:10.155 "uuid": 
"f52700b1-34c8-5df2-b04e-fce401dbfaa7", 00:16:10.155 "is_configured": true, 00:16:10.155 "data_offset": 2048, 00:16:10.156 "data_size": 63488 00:16:10.156 }, 00:16:10.156 { 00:16:10.156 "name": "pt2", 00:16:10.156 "uuid": "4599f518-0d28-5203-8630-0eb2a4b38915", 00:16:10.156 "is_configured": true, 00:16:10.156 "data_offset": 2048, 00:16:10.156 "data_size": 63488 00:16:10.156 }, 00:16:10.156 { 00:16:10.156 "name": "pt3", 00:16:10.156 "uuid": "54d41a39-81f3-5e16-91a2-9fc598adc5b6", 00:16:10.156 "is_configured": true, 00:16:10.156 "data_offset": 2048, 00:16:10.156 "data_size": 63488 00:16:10.156 } 00:16:10.156 ] 00:16:10.156 }' 00:16:10.156 13:01:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:10.156 13:01:28 -- common/autotest_common.sh@10 -- # set +x 00:16:10.723 13:01:29 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:10.723 13:01:29 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:10.982 [2024-06-11 13:01:29.710123] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.982 13:01:29 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c1c2bd79-1b18-4fb4-8c41-7feb53d0f830 00:16:10.982 13:01:29 -- bdev/bdev_raid.sh@380 -- # '[' -z c1c2bd79-1b18-4fb4-8c41-7feb53d0f830 ']' 00:16:10.982 13:01:29 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:11.241 [2024-06-11 13:01:29.909921] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.241 [2024-06-11 13:01:29.909953] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.241 [2024-06-11 13:01:29.910037] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.241 [2024-06-11 13:01:29.910159] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.241 [2024-06-11 13:01:29.910179] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:11.241 13:01:29 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.241 13:01:29 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:11.500 13:01:30 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:11.500 13:01:30 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:11.500 13:01:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:11.500 13:01:30 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:11.759 13:01:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:11.759 13:01:30 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:11.759 13:01:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:11.759 13:01:30 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:12.018 13:01:30 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:12.018 13:01:30 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:12.277 13:01:30 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:12.277 13:01:30 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:12.277 13:01:30 -- common/autotest_common.sh@640 -- # local es=0 00:16:12.277 13:01:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:12.277 13:01:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.277 13:01:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.277 13:01:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.277 13:01:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.277 13:01:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.277 13:01:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:12.277 13:01:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.277 13:01:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:12.277 13:01:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:12.535 [2024-06-11 13:01:31.146330] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:12.535 [2024-06-11 13:01:31.148827] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:12.535 [2024-06-11 13:01:31.148904] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:12.535 [2024-06-11 13:01:31.148998] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:12.535 [2024-06-11 13:01:31.149671] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:12.535 [2024-06-11 13:01:31.149882] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:12.535 [2024-06-11 13:01:31.150108] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.535 [2024-06-11 13:01:31.150132] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:16:12.535 request: 00:16:12.535 { 00:16:12.535 "name": "raid_bdev1", 00:16:12.535 "raid_level": "raid0", 00:16:12.535 "base_bdevs": [ 00:16:12.535 "malloc1", 00:16:12.535 "malloc2", 00:16:12.535 "malloc3" 00:16:12.535 ], 00:16:12.535 "superblock": false, 00:16:12.535 "strip_size_kb": 64, 00:16:12.535 "method": "bdev_raid_create", 00:16:12.535 "req_id": 1 00:16:12.535 } 00:16:12.535 Got JSON-RPC error response 00:16:12.535 response: 00:16:12.535 { 00:16:12.535 "code": -17, 00:16:12.535 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:12.535 } 00:16:12.535 13:01:31 -- common/autotest_common.sh@643 -- # es=1 00:16:12.535 13:01:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:12.535 13:01:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:12.535 13:01:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:12.535 13:01:31 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:12.535 13:01:31 -- bdev/bdev_raid.sh@403 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.535 13:01:31 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:12.535 13:01:31 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:12.535 13:01:31 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:12.794 [2024-06-11 13:01:31.606409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:12.794 [2024-06-11 13:01:31.606603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.794 [2024-06-11 13:01:31.606732] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:12.794 [2024-06-11 13:01:31.606848] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.794 [2024-06-11 13:01:31.608800] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.794 [2024-06-11 13:01:31.608933] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:12.794 [2024-06-11 13:01:31.609157] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:12.794 [2024-06-11 13:01:31.609215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:12.794 pt1 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.794 13:01:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.052 13:01:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.052 "name": "raid_bdev1", 00:16:13.052 "uuid": "c1c2bd79-1b18-4fb4-8c41-7feb53d0f830", 00:16:13.052 "strip_size_kb": 64, 00:16:13.052 "state": "configuring", 00:16:13.052 "raid_level": "raid0", 00:16:13.052 "superblock": true, 00:16:13.052 "num_base_bdevs": 3, 00:16:13.052 "num_base_bdevs_discovered": 1, 00:16:13.052 "num_base_bdevs_operational": 3, 00:16:13.052 "base_bdevs_list": [ 00:16:13.052 { 00:16:13.052 "name": "pt1", 00:16:13.052 "uuid": "f52700b1-34c8-5df2-b04e-fce401dbfaa7", 00:16:13.052 "is_configured": true, 00:16:13.052 "data_offset": 2048, 00:16:13.052 "data_size": 63488 00:16:13.052 }, 00:16:13.052 { 00:16:13.052 "name": null, 00:16:13.052 "uuid": "4599f518-0d28-5203-8630-0eb2a4b38915", 00:16:13.052 "is_configured": false, 00:16:13.052 "data_offset": 2048, 00:16:13.052 "data_size": 63488 00:16:13.052 }, 00:16:13.052 { 00:16:13.052 "name": null, 00:16:13.052 "uuid": "54d41a39-81f3-5e16-91a2-9fc598adc5b6", 00:16:13.052 "is_configured": false, 00:16:13.052 "data_offset": 2048, 
00:16:13.052 "data_size": 63488 00:16:13.052 } 00:16:13.052 ] 00:16:13.052 }' 00:16:13.052 13:01:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.052 13:01:31 -- common/autotest_common.sh@10 -- # set +x 00:16:13.984 13:01:32 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:13.984 13:01:32 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:13.984 [2024-06-11 13:01:32.650667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:13.984 [2024-06-11 13:01:32.651169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.984 [2024-06-11 13:01:32.651375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:13.984 [2024-06-11 13:01:32.651490] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.984 [2024-06-11 13:01:32.652073] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.984 [2024-06-11 13:01:32.652224] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:13.984 [2024-06-11 13:01:32.652485] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:13.984 [2024-06-11 13:01:32.652529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:13.984 pt2 00:16:13.984 13:01:32 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:14.241 [2024-06-11 13:01:32.902731] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.241 13:01:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.499 13:01:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.499 "name": "raid_bdev1", 00:16:14.499 "uuid": "c1c2bd79-1b18-4fb4-8c41-7feb53d0f830", 00:16:14.499 "strip_size_kb": 64, 00:16:14.499 "state": "configuring", 00:16:14.499 "raid_level": "raid0", 00:16:14.499 "superblock": true, 00:16:14.499 "num_base_bdevs": 3, 00:16:14.499 "num_base_bdevs_discovered": 1, 00:16:14.499 "num_base_bdevs_operational": 3, 00:16:14.499 "base_bdevs_list": [ 00:16:14.499 { 00:16:14.499 "name": "pt1", 00:16:14.499 "uuid": "f52700b1-34c8-5df2-b04e-fce401dbfaa7", 00:16:14.499 "is_configured": true, 00:16:14.499 "data_offset": 2048, 00:16:14.499 "data_size": 63488 00:16:14.499 }, 00:16:14.499 { 00:16:14.499 "name": null, 00:16:14.499 "uuid": 
"4599f518-0d28-5203-8630-0eb2a4b38915", 00:16:14.499 "is_configured": false, 00:16:14.499 "data_offset": 2048, 00:16:14.499 "data_size": 63488 00:16:14.499 }, 00:16:14.499 { 00:16:14.499 "name": null, 00:16:14.499 "uuid": "54d41a39-81f3-5e16-91a2-9fc598adc5b6", 00:16:14.499 "is_configured": false, 00:16:14.499 "data_offset": 2048, 00:16:14.499 "data_size": 63488 00:16:14.499 } 00:16:14.499 ] 00:16:14.499 }' 00:16:14.499 13:01:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.499 13:01:33 -- common/autotest_common.sh@10 -- # set +x 00:16:15.065 13:01:33 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:15.065 13:01:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:15.065 13:01:33 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.325 [2024-06-11 13:01:34.058051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.326 [2024-06-11 13:01:34.058172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.326 [2024-06-11 13:01:34.058212] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:15.326 [2024-06-11 13:01:34.058247] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.326 [2024-06-11 13:01:34.058794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.326 [2024-06-11 13:01:34.058831] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.326 [2024-06-11 13:01:34.058945] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:15.326 [2024-06-11 13:01:34.058974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.326 pt2 00:16:15.326 13:01:34 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:15.326 13:01:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:15.326 13:01:34 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:15.584 [2024-06-11 13:01:34.294096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:15.584 [2024-06-11 13:01:34.294172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.584 [2024-06-11 13:01:34.294206] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:15.584 [2024-06-11 13:01:34.294231] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.584 [2024-06-11 13:01:34.294634] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.584 [2024-06-11 13:01:34.294680] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:15.584 [2024-06-11 13:01:34.294816] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:15.584 [2024-06-11 13:01:34.294842] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:15.584 [2024-06-11 13:01:34.294957] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:16:15.584 [2024-06-11 13:01:34.294970] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:15.584 [2024-06-11 13:01:34.295075] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005c70 00:16:15.584 [2024-06-11 13:01:34.295404] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:16:15.584 [2024-06-11 13:01:34.295417] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:16:15.584 [2024-06-11 13:01:34.295538] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.584 pt3 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.584 13:01:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.843 13:01:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:15.843 "name": "raid_bdev1", 00:16:15.843 "uuid": "c1c2bd79-1b18-4fb4-8c41-7feb53d0f830", 00:16:15.843 "strip_size_kb": 64, 00:16:15.843 "state": "online", 00:16:15.843 "raid_level": "raid0", 00:16:15.843 "superblock": true, 00:16:15.843 "num_base_bdevs": 3, 00:16:15.843 "num_base_bdevs_discovered": 3, 00:16:15.843 "num_base_bdevs_operational": 3, 00:16:15.843 "base_bdevs_list": [ 00:16:15.843 { 00:16:15.843 "name": "pt1", 00:16:15.843 "uuid": "f52700b1-34c8-5df2-b04e-fce401dbfaa7", 00:16:15.843 "is_configured": true, 00:16:15.843 "data_offset": 2048, 00:16:15.843 "data_size": 63488 00:16:15.843 }, 00:16:15.843 { 00:16:15.843 "name": "pt2", 00:16:15.843 "uuid": "4599f518-0d28-5203-8630-0eb2a4b38915", 00:16:15.843 "is_configured": true, 00:16:15.843 "data_offset": 2048, 00:16:15.843 "data_size": 63488 00:16:15.843 }, 00:16:15.843 { 00:16:15.843 "name": "pt3", 00:16:15.843 "uuid": "54d41a39-81f3-5e16-91a2-9fc598adc5b6", 00:16:15.843 "is_configured": true, 00:16:15.843 "data_offset": 2048, 00:16:15.843 "data_size": 63488 00:16:15.843 } 00:16:15.843 ] 00:16:15.843 }' 00:16:15.843 13:01:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:15.843 13:01:34 -- common/autotest_common.sh@10 -- # set +x 00:16:16.410 13:01:35 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:16.410 13:01:35 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:16.683 [2024-06-11 13:01:35.398643] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.683 13:01:35 -- bdev/bdev_raid.sh@430 -- # '[' c1c2bd79-1b18-4fb4-8c41-7feb53d0f830 '!=' c1c2bd79-1b18-4fb4-8c41-7feb53d0f830 ']' 00:16:16.683 13:01:35 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:16:16.683 13:01:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:16.684 
13:01:35 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:16.684 13:01:35 -- bdev/bdev_raid.sh@511 -- # killprocess 118505 00:16:16.684 13:01:35 -- common/autotest_common.sh@926 -- # '[' -z 118505 ']' 00:16:16.684 13:01:35 -- common/autotest_common.sh@930 -- # kill -0 118505 00:16:16.684 13:01:35 -- common/autotest_common.sh@931 -- # uname 00:16:16.684 13:01:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:16.684 13:01:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118505 00:16:16.684 killing process with pid 118505 00:16:16.684 13:01:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:16.684 13:01:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:16.684 13:01:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118505' 00:16:16.684 13:01:35 -- common/autotest_common.sh@945 -- # kill 118505 00:16:16.684 13:01:35 -- common/autotest_common.sh@950 -- # wait 118505 00:16:16.684 [2024-06-11 13:01:35.433934] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.684 [2024-06-11 13:01:35.434004] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.684 [2024-06-11 13:01:35.434060] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.684 [2024-06-11 13:01:35.434070] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:16:16.955 [2024-06-11 13:01:35.638226] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.890 ************************************ 00:16:17.890 END TEST raid_superblock_test 00:16:17.890 ************************************ 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:17.890 00:16:17.890 real 0m10.345s 00:16:17.890 user 0m18.170s 00:16:17.890 sys 0m1.170s 00:16:17.890 13:01:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.890 13:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:17.890 13:01:36 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:17.890 13:01:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:17.890 13:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:17.890 ************************************ 00:16:17.890 START TEST raid_state_function_test 00:16:17.890 ************************************ 00:16:17.890 13:01:36 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # echo 
BaseBdev2 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@226 -- # raid_pid=118832 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118832' 00:16:17.890 Process raid pid: 118832 00:16:17.890 13:01:36 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118832 /var/tmp/spdk-raid.sock 00:16:17.890 13:01:36 -- common/autotest_common.sh@819 -- # '[' -z 118832 ']' 00:16:17.890 13:01:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:17.890 13:01:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.890 13:01:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:17.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:17.890 13:01:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.890 13:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:17.890 [2024-06-11 13:01:36.698917] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
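Illustrative aside (not part of the captured output): everything raid_state_function_test does from this point on is driven through the JSON-RPC socket that the bdev_svc app starting here is about to open. A minimal sketch of that driving pattern, using only calls that appear verbatim elsewhere in this log, might look like the following; the relative paths and the backgrounded standalone launch are assumptions for readability, the log itself uses absolute paths under /home/vagrant/spdk_repo/spdk and the waitforlisten helper.
    # start the bare bdev application with raid debug logging enabled (hypothetical standalone run)
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    # create the raid first; it sits in "configuring" until the named base bdevs exist
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # then create base bdevs one by one and watch the raid state
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all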
00:16:17.890 [2024-06-11 13:01:36.699102] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.148 [2024-06-11 13:01:36.885829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.407 [2024-06-11 13:01:37.156746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.665 [2024-06-11 13:01:37.335501] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.923 13:01:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:18.923 13:01:37 -- common/autotest_common.sh@852 -- # return 0 00:16:18.923 13:01:37 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:19.181 [2024-06-11 13:01:37.874764] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.181 [2024-06-11 13:01:37.874833] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.181 [2024-06-11 13:01:37.874862] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.181 [2024-06-11 13:01:37.874883] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.181 [2024-06-11 13:01:37.874890] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:19.181 [2024-06-11 13:01:37.874931] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.181 13:01:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.440 13:01:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.440 "name": "Existed_Raid", 00:16:19.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.440 "strip_size_kb": 64, 00:16:19.440 "state": "configuring", 00:16:19.440 "raid_level": "concat", 00:16:19.440 "superblock": false, 00:16:19.440 "num_base_bdevs": 3, 00:16:19.440 "num_base_bdevs_discovered": 0, 00:16:19.440 "num_base_bdevs_operational": 3, 00:16:19.440 "base_bdevs_list": [ 00:16:19.440 { 00:16:19.440 "name": "BaseBdev1", 00:16:19.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.440 "is_configured": false, 00:16:19.440 "data_offset": 0, 00:16:19.440 "data_size": 0 00:16:19.440 }, 00:16:19.440 { 00:16:19.440 "name": "BaseBdev2", 00:16:19.440 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:19.440 "is_configured": false, 00:16:19.440 "data_offset": 0, 00:16:19.440 "data_size": 0 00:16:19.440 }, 00:16:19.440 { 00:16:19.440 "name": "BaseBdev3", 00:16:19.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.440 "is_configured": false, 00:16:19.440 "data_offset": 0, 00:16:19.440 "data_size": 0 00:16:19.440 } 00:16:19.440 ] 00:16:19.440 }' 00:16:19.440 13:01:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.440 13:01:38 -- common/autotest_common.sh@10 -- # set +x 00:16:20.007 13:01:38 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:20.266 [2024-06-11 13:01:38.994914] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.266 [2024-06-11 13:01:38.994953] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:20.266 13:01:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:20.525 [2024-06-11 13:01:39.206988] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.525 [2024-06-11 13:01:39.207073] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.525 [2024-06-11 13:01:39.207102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.525 [2024-06-11 13:01:39.207121] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.525 [2024-06-11 13:01:39.207128] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:20.525 [2024-06-11 13:01:39.207160] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:20.525 13:01:39 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:20.783 [2024-06-11 13:01:39.502148] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.783 BaseBdev1 00:16:20.783 13:01:39 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:20.783 13:01:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:20.783 13:01:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:20.783 13:01:39 -- common/autotest_common.sh@889 -- # local i 00:16:20.783 13:01:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:20.783 13:01:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:20.783 13:01:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:21.041 13:01:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:21.299 [ 00:16:21.299 { 00:16:21.299 "name": "BaseBdev1", 00:16:21.299 "aliases": [ 00:16:21.299 "cba27ede-d35c-4974-8ce5-d31763aeee86" 00:16:21.299 ], 00:16:21.299 "product_name": "Malloc disk", 00:16:21.299 "block_size": 512, 00:16:21.299 "num_blocks": 65536, 00:16:21.299 "uuid": "cba27ede-d35c-4974-8ce5-d31763aeee86", 00:16:21.299 "assigned_rate_limits": { 00:16:21.299 "rw_ios_per_sec": 0, 00:16:21.299 "rw_mbytes_per_sec": 0, 00:16:21.299 "r_mbytes_per_sec": 0, 00:16:21.299 "w_mbytes_per_sec": 
0 00:16:21.299 }, 00:16:21.299 "claimed": true, 00:16:21.299 "claim_type": "exclusive_write", 00:16:21.299 "zoned": false, 00:16:21.299 "supported_io_types": { 00:16:21.299 "read": true, 00:16:21.299 "write": true, 00:16:21.299 "unmap": true, 00:16:21.299 "write_zeroes": true, 00:16:21.299 "flush": true, 00:16:21.299 "reset": true, 00:16:21.299 "compare": false, 00:16:21.299 "compare_and_write": false, 00:16:21.299 "abort": true, 00:16:21.299 "nvme_admin": false, 00:16:21.299 "nvme_io": false 00:16:21.299 }, 00:16:21.299 "memory_domains": [ 00:16:21.299 { 00:16:21.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.299 "dma_device_type": 2 00:16:21.299 } 00:16:21.299 ], 00:16:21.299 "driver_specific": {} 00:16:21.299 } 00:16:21.299 ] 00:16:21.299 13:01:39 -- common/autotest_common.sh@895 -- # return 0 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.299 13:01:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.558 13:01:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:21.558 "name": "Existed_Raid", 00:16:21.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.558 "strip_size_kb": 64, 00:16:21.558 "state": "configuring", 00:16:21.558 "raid_level": "concat", 00:16:21.558 "superblock": false, 00:16:21.558 "num_base_bdevs": 3, 00:16:21.558 "num_base_bdevs_discovered": 1, 00:16:21.558 "num_base_bdevs_operational": 3, 00:16:21.558 "base_bdevs_list": [ 00:16:21.558 { 00:16:21.558 "name": "BaseBdev1", 00:16:21.558 "uuid": "cba27ede-d35c-4974-8ce5-d31763aeee86", 00:16:21.558 "is_configured": true, 00:16:21.558 "data_offset": 0, 00:16:21.558 "data_size": 65536 00:16:21.558 }, 00:16:21.558 { 00:16:21.558 "name": "BaseBdev2", 00:16:21.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.558 "is_configured": false, 00:16:21.558 "data_offset": 0, 00:16:21.558 "data_size": 0 00:16:21.558 }, 00:16:21.558 { 00:16:21.558 "name": "BaseBdev3", 00:16:21.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.558 "is_configured": false, 00:16:21.558 "data_offset": 0, 00:16:21.558 "data_size": 0 00:16:21.558 } 00:16:21.558 ] 00:16:21.558 }' 00:16:21.558 13:01:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:21.558 13:01:40 -- common/autotest_common.sh@10 -- # set +x 00:16:22.126 13:01:40 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:22.384 [2024-06-11 13:01:41.086555] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:22.384 [2024-06-11 13:01:41.086628] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:16:22.384 13:01:41 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:22.384 13:01:41 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:22.643 [2024-06-11 13:01:41.350616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.643 [2024-06-11 13:01:41.352495] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.643 [2024-06-11 13:01:41.352551] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.643 [2024-06-11 13:01:41.352579] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:22.643 [2024-06-11 13:01:41.352605] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.643 13:01:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.902 13:01:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.902 "name": "Existed_Raid", 00:16:22.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.902 "strip_size_kb": 64, 00:16:22.902 "state": "configuring", 00:16:22.902 "raid_level": "concat", 00:16:22.902 "superblock": false, 00:16:22.902 "num_base_bdevs": 3, 00:16:22.902 "num_base_bdevs_discovered": 1, 00:16:22.902 "num_base_bdevs_operational": 3, 00:16:22.902 "base_bdevs_list": [ 00:16:22.902 { 00:16:22.902 "name": "BaseBdev1", 00:16:22.902 "uuid": "cba27ede-d35c-4974-8ce5-d31763aeee86", 00:16:22.902 "is_configured": true, 00:16:22.902 "data_offset": 0, 00:16:22.902 "data_size": 65536 00:16:22.902 }, 00:16:22.902 { 00:16:22.902 "name": "BaseBdev2", 00:16:22.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.902 "is_configured": false, 00:16:22.902 "data_offset": 0, 00:16:22.902 "data_size": 0 00:16:22.902 }, 00:16:22.902 { 00:16:22.902 "name": "BaseBdev3", 00:16:22.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.902 "is_configured": false, 00:16:22.902 "data_offset": 0, 00:16:22.902 "data_size": 0 00:16:22.902 } 00:16:22.902 ] 00:16:22.902 }' 00:16:22.902 13:01:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.902 13:01:41 -- common/autotest_common.sh@10 -- # set +x 00:16:23.836 13:01:42 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:23.836 [2024-06-11 13:01:42.602930] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.836 BaseBdev2 00:16:23.836 13:01:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:23.836 13:01:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:23.836 13:01:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:23.836 13:01:42 -- common/autotest_common.sh@889 -- # local i 00:16:23.836 13:01:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:23.836 13:01:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:23.836 13:01:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:24.094 13:01:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:24.352 [ 00:16:24.352 { 00:16:24.352 "name": "BaseBdev2", 00:16:24.352 "aliases": [ 00:16:24.352 "49ebf12f-8604-4ca4-be03-cf94a072bac6" 00:16:24.352 ], 00:16:24.352 "product_name": "Malloc disk", 00:16:24.352 "block_size": 512, 00:16:24.352 "num_blocks": 65536, 00:16:24.352 "uuid": "49ebf12f-8604-4ca4-be03-cf94a072bac6", 00:16:24.352 "assigned_rate_limits": { 00:16:24.352 "rw_ios_per_sec": 0, 00:16:24.352 "rw_mbytes_per_sec": 0, 00:16:24.352 "r_mbytes_per_sec": 0, 00:16:24.352 "w_mbytes_per_sec": 0 00:16:24.352 }, 00:16:24.352 "claimed": true, 00:16:24.352 "claim_type": "exclusive_write", 00:16:24.352 "zoned": false, 00:16:24.352 "supported_io_types": { 00:16:24.352 "read": true, 00:16:24.352 "write": true, 00:16:24.352 "unmap": true, 00:16:24.352 "write_zeroes": true, 00:16:24.352 "flush": true, 00:16:24.352 "reset": true, 00:16:24.352 "compare": false, 00:16:24.352 "compare_and_write": false, 00:16:24.352 "abort": true, 00:16:24.352 "nvme_admin": false, 00:16:24.352 "nvme_io": false 00:16:24.352 }, 00:16:24.352 "memory_domains": [ 00:16:24.352 { 00:16:24.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.352 "dma_device_type": 2 00:16:24.352 } 00:16:24.352 ], 00:16:24.352 "driver_specific": {} 00:16:24.352 } 00:16:24.352 ] 00:16:24.352 13:01:43 -- common/autotest_common.sh@895 -- # return 0 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.352 13:01:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
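Illustrative aside (not part of the captured output): the raid_bdev_info object dumped just below still reports "state": "configuring" with num_base_bdevs_discovered 2 of 3, because only BaseBdev1 and BaseBdev2 exist at this point. The raid only moves to "online" once every base bdev named at create time has been created and claimed, and, since concat has no redundancy, deleting any member afterwards drops it to "offline" rather than a degraded state, as the has_redundancy check further down shows. The state check the test keeps repeating can be reproduced with the same jq filter it uses; the extra projection of .state and the discovered count is an illustrative addition.
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'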
00:16:24.610 13:01:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.610 "name": "Existed_Raid", 00:16:24.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.610 "strip_size_kb": 64, 00:16:24.610 "state": "configuring", 00:16:24.610 "raid_level": "concat", 00:16:24.610 "superblock": false, 00:16:24.610 "num_base_bdevs": 3, 00:16:24.610 "num_base_bdevs_discovered": 2, 00:16:24.610 "num_base_bdevs_operational": 3, 00:16:24.610 "base_bdevs_list": [ 00:16:24.610 { 00:16:24.610 "name": "BaseBdev1", 00:16:24.610 "uuid": "cba27ede-d35c-4974-8ce5-d31763aeee86", 00:16:24.610 "is_configured": true, 00:16:24.610 "data_offset": 0, 00:16:24.610 "data_size": 65536 00:16:24.610 }, 00:16:24.610 { 00:16:24.610 "name": "BaseBdev2", 00:16:24.610 "uuid": "49ebf12f-8604-4ca4-be03-cf94a072bac6", 00:16:24.610 "is_configured": true, 00:16:24.610 "data_offset": 0, 00:16:24.610 "data_size": 65536 00:16:24.610 }, 00:16:24.610 { 00:16:24.610 "name": "BaseBdev3", 00:16:24.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.610 "is_configured": false, 00:16:24.610 "data_offset": 0, 00:16:24.610 "data_size": 0 00:16:24.610 } 00:16:24.610 ] 00:16:24.610 }' 00:16:24.610 13:01:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.610 13:01:43 -- common/autotest_common.sh@10 -- # set +x 00:16:25.175 13:01:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:25.433 [2024-06-11 13:01:44.184452] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.433 [2024-06-11 13:01:44.184951] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:25.433 [2024-06-11 13:01:44.184995] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:25.433 [2024-06-11 13:01:44.185231] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:25.433 [2024-06-11 13:01:44.185699] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:25.433 [2024-06-11 13:01:44.185857] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:25.433 [2024-06-11 13:01:44.186211] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.433 BaseBdev3 00:16:25.433 13:01:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:25.433 13:01:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:25.433 13:01:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:25.433 13:01:44 -- common/autotest_common.sh@889 -- # local i 00:16:25.433 13:01:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:25.433 13:01:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:25.433 13:01:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:25.691 13:01:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:25.949 [ 00:16:25.949 { 00:16:25.949 "name": "BaseBdev3", 00:16:25.949 "aliases": [ 00:16:25.949 "5a513d21-add5-4be3-9503-79bdb2902820" 00:16:25.949 ], 00:16:25.949 "product_name": "Malloc disk", 00:16:25.949 "block_size": 512, 00:16:25.949 "num_blocks": 65536, 00:16:25.949 "uuid": "5a513d21-add5-4be3-9503-79bdb2902820", 00:16:25.949 "assigned_rate_limits": { 00:16:25.949 
"rw_ios_per_sec": 0, 00:16:25.949 "rw_mbytes_per_sec": 0, 00:16:25.949 "r_mbytes_per_sec": 0, 00:16:25.949 "w_mbytes_per_sec": 0 00:16:25.949 }, 00:16:25.949 "claimed": true, 00:16:25.949 "claim_type": "exclusive_write", 00:16:25.949 "zoned": false, 00:16:25.950 "supported_io_types": { 00:16:25.950 "read": true, 00:16:25.950 "write": true, 00:16:25.950 "unmap": true, 00:16:25.950 "write_zeroes": true, 00:16:25.950 "flush": true, 00:16:25.950 "reset": true, 00:16:25.950 "compare": false, 00:16:25.950 "compare_and_write": false, 00:16:25.950 "abort": true, 00:16:25.950 "nvme_admin": false, 00:16:25.950 "nvme_io": false 00:16:25.950 }, 00:16:25.950 "memory_domains": [ 00:16:25.950 { 00:16:25.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.950 "dma_device_type": 2 00:16:25.950 } 00:16:25.950 ], 00:16:25.950 "driver_specific": {} 00:16:25.950 } 00:16:25.950 ] 00:16:25.950 13:01:44 -- common/autotest_common.sh@895 -- # return 0 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.950 "name": "Existed_Raid", 00:16:25.950 "uuid": "9501ca94-4884-44a0-80b4-9464a500a342", 00:16:25.950 "strip_size_kb": 64, 00:16:25.950 "state": "online", 00:16:25.950 "raid_level": "concat", 00:16:25.950 "superblock": false, 00:16:25.950 "num_base_bdevs": 3, 00:16:25.950 "num_base_bdevs_discovered": 3, 00:16:25.950 "num_base_bdevs_operational": 3, 00:16:25.950 "base_bdevs_list": [ 00:16:25.950 { 00:16:25.950 "name": "BaseBdev1", 00:16:25.950 "uuid": "cba27ede-d35c-4974-8ce5-d31763aeee86", 00:16:25.950 "is_configured": true, 00:16:25.950 "data_offset": 0, 00:16:25.950 "data_size": 65536 00:16:25.950 }, 00:16:25.950 { 00:16:25.950 "name": "BaseBdev2", 00:16:25.950 "uuid": "49ebf12f-8604-4ca4-be03-cf94a072bac6", 00:16:25.950 "is_configured": true, 00:16:25.950 "data_offset": 0, 00:16:25.950 "data_size": 65536 00:16:25.950 }, 00:16:25.950 { 00:16:25.950 "name": "BaseBdev3", 00:16:25.950 "uuid": "5a513d21-add5-4be3-9503-79bdb2902820", 00:16:25.950 "is_configured": true, 00:16:25.950 "data_offset": 0, 00:16:25.950 "data_size": 65536 00:16:25.950 } 00:16:25.950 ] 00:16:25.950 }' 00:16:25.950 13:01:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.950 13:01:44 -- common/autotest_common.sh@10 -- # set +x 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:26.884 [2024-06-11 13:01:45.641029] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.884 [2024-06-11 13:01:45.641228] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.884 [2024-06-11 13:01:45.641388] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.884 13:01:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.142 13:01:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:27.143 "name": "Existed_Raid", 00:16:27.143 "uuid": "9501ca94-4884-44a0-80b4-9464a500a342", 00:16:27.143 "strip_size_kb": 64, 00:16:27.143 "state": "offline", 00:16:27.143 "raid_level": "concat", 00:16:27.143 "superblock": false, 00:16:27.143 "num_base_bdevs": 3, 00:16:27.143 "num_base_bdevs_discovered": 2, 00:16:27.143 "num_base_bdevs_operational": 2, 00:16:27.143 "base_bdevs_list": [ 00:16:27.143 { 00:16:27.143 "name": null, 00:16:27.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.143 "is_configured": false, 00:16:27.143 "data_offset": 0, 00:16:27.143 "data_size": 65536 00:16:27.143 }, 00:16:27.143 { 00:16:27.143 "name": "BaseBdev2", 00:16:27.143 "uuid": "49ebf12f-8604-4ca4-be03-cf94a072bac6", 00:16:27.143 "is_configured": true, 00:16:27.143 "data_offset": 0, 00:16:27.143 "data_size": 65536 00:16:27.143 }, 00:16:27.143 { 00:16:27.143 "name": "BaseBdev3", 00:16:27.143 "uuid": "5a513d21-add5-4be3-9503-79bdb2902820", 00:16:27.143 "is_configured": true, 00:16:27.143 "data_offset": 0, 00:16:27.143 "data_size": 65536 00:16:27.143 } 00:16:27.143 ] 00:16:27.143 }' 00:16:27.143 13:01:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:27.143 13:01:45 -- common/autotest_common.sh@10 -- # set +x 00:16:28.078 13:01:46 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:28.078 13:01:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:28.078 13:01:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.078 13:01:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:28.078 13:01:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:28.078 13:01:46 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.078 13:01:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:28.337 [2024-06-11 13:01:47.038030] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.337 13:01:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:28.337 13:01:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:28.337 13:01:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.337 13:01:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:28.595 13:01:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:28.595 13:01:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.595 13:01:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:28.856 [2024-06-11 13:01:47.561183] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.856 [2024-06-11 13:01:47.561378] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:28.856 13:01:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:28.856 13:01:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:28.856 13:01:47 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.856 13:01:47 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:29.115 13:01:47 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:29.115 13:01:47 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:29.115 13:01:47 -- bdev/bdev_raid.sh@287 -- # killprocess 118832 00:16:29.115 13:01:47 -- common/autotest_common.sh@926 -- # '[' -z 118832 ']' 00:16:29.115 13:01:47 -- common/autotest_common.sh@930 -- # kill -0 118832 00:16:29.115 13:01:47 -- common/autotest_common.sh@931 -- # uname 00:16:29.115 13:01:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:29.115 13:01:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118832 00:16:29.115 killing process with pid 118832 00:16:29.116 13:01:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:29.116 13:01:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:29.116 13:01:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118832' 00:16:29.116 13:01:47 -- common/autotest_common.sh@945 -- # kill 118832 00:16:29.116 13:01:47 -- common/autotest_common.sh@950 -- # wait 118832 00:16:29.116 [2024-06-11 13:01:47.921298] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:29.116 [2024-06-11 13:01:47.921473] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:30.050 ************************************ 00:16:30.051 END TEST raid_state_function_test 00:16:30.051 ************************************ 00:16:30.051 13:01:48 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:30.051 00:16:30.051 real 0m12.249s 00:16:30.051 user 0m21.882s 00:16:30.051 sys 0m1.389s 00:16:30.051 13:01:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.051 13:01:48 -- common/autotest_common.sh@10 -- # set +x 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:30.309 13:01:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
00:16:30.309 13:01:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:30.309 13:01:48 -- common/autotest_common.sh@10 -- # set +x 00:16:30.309 ************************************ 00:16:30.309 START TEST raid_state_function_test_sb 00:16:30.309 ************************************ 00:16:30.309 13:01:48 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:30.309 13:01:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=119227 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119227' 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:30.310 Process raid pid: 119227 00:16:30.310 13:01:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119227 /var/tmp/spdk-raid.sock 00:16:30.310 13:01:48 -- common/autotest_common.sh@819 -- # '[' -z 119227 ']' 00:16:30.310 13:01:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:30.310 13:01:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:30.310 13:01:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:30.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
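Illustrative aside (not part of the captured output): the _sb variant starting here differs from the previous run only in that superblock_create_arg is set to -s, so the create call issued once the RPC socket is up writes raid metadata onto each base bdev. In rpc.py terms this is the same call that appears later in this log; names and strip size match the test's own arguments.
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
With -s, the base bdev entries later report data_offset 2048 and data_size 63488 instead of 0 and 65536, since the superblock reserves space at the start of each 65536-block malloc bdev.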
00:16:30.310 13:01:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:30.310 13:01:48 -- common/autotest_common.sh@10 -- # set +x 00:16:30.310 [2024-06-11 13:01:49.006203] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:30.310 [2024-06-11 13:01:49.006614] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.568 [2024-06-11 13:01:49.174425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.568 [2024-06-11 13:01:49.348925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.827 [2024-06-11 13:01:49.524833] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.394 13:01:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:31.394 13:01:49 -- common/autotest_common.sh@852 -- # return 0 00:16:31.394 13:01:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:31.394 [2024-06-11 13:01:50.224895] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.394 [2024-06-11 13:01:50.225137] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.394 [2024-06-11 13:01:50.225278] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.394 [2024-06-11 13:01:50.225406] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.394 [2024-06-11 13:01:50.225527] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.394 [2024-06-11 13:01:50.225679] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.652 13:01:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:31.652 13:01:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.652 13:01:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.652 13:01:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:31.652 13:01:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.653 13:01:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:31.653 13:01:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.653 13:01:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.653 13:01:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.653 13:01:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.653 13:01:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.653 13:01:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.910 13:01:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.910 "name": "Existed_Raid", 00:16:31.910 "uuid": "e5af32f8-084b-4956-b953-a291c7b739b0", 00:16:31.910 "strip_size_kb": 64, 00:16:31.910 "state": "configuring", 00:16:31.910 "raid_level": "concat", 00:16:31.910 "superblock": true, 00:16:31.910 "num_base_bdevs": 3, 00:16:31.910 "num_base_bdevs_discovered": 0, 00:16:31.910 "num_base_bdevs_operational": 3, 00:16:31.910 "base_bdevs_list": [ 00:16:31.910 { 00:16:31.910 "name": 
"BaseBdev1", 00:16:31.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.910 "is_configured": false, 00:16:31.911 "data_offset": 0, 00:16:31.911 "data_size": 0 00:16:31.911 }, 00:16:31.911 { 00:16:31.911 "name": "BaseBdev2", 00:16:31.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.911 "is_configured": false, 00:16:31.911 "data_offset": 0, 00:16:31.911 "data_size": 0 00:16:31.911 }, 00:16:31.911 { 00:16:31.911 "name": "BaseBdev3", 00:16:31.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.911 "is_configured": false, 00:16:31.911 "data_offset": 0, 00:16:31.911 "data_size": 0 00:16:31.911 } 00:16:31.911 ] 00:16:31.911 }' 00:16:31.911 13:01:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.911 13:01:50 -- common/autotest_common.sh@10 -- # set +x 00:16:32.477 13:01:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:32.736 [2024-06-11 13:01:51.360961] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.736 [2024-06-11 13:01:51.361200] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:32.736 13:01:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:32.995 [2024-06-11 13:01:51.613089] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.996 [2024-06-11 13:01:51.614525] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.996 [2024-06-11 13:01:51.614649] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.996 [2024-06-11 13:01:51.614766] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.996 [2024-06-11 13:01:51.614872] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:32.996 [2024-06-11 13:01:51.614938] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:32.996 13:01:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:32.996 [2024-06-11 13:01:51.830924] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.996 BaseBdev1 00:16:33.254 13:01:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:33.254 13:01:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:33.254 13:01:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:33.254 13:01:51 -- common/autotest_common.sh@889 -- # local i 00:16:33.254 13:01:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:33.254 13:01:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:33.254 13:01:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.254 13:01:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.512 [ 00:16:33.512 { 00:16:33.512 "name": "BaseBdev1", 00:16:33.512 "aliases": [ 00:16:33.512 "90ec3c63-8a41-4c37-96e5-41a9387ceb2d" 00:16:33.512 ], 00:16:33.512 "product_name": "Malloc disk", 00:16:33.512 "block_size": 512, 00:16:33.512 
"num_blocks": 65536, 00:16:33.512 "uuid": "90ec3c63-8a41-4c37-96e5-41a9387ceb2d", 00:16:33.512 "assigned_rate_limits": { 00:16:33.512 "rw_ios_per_sec": 0, 00:16:33.512 "rw_mbytes_per_sec": 0, 00:16:33.512 "r_mbytes_per_sec": 0, 00:16:33.512 "w_mbytes_per_sec": 0 00:16:33.512 }, 00:16:33.512 "claimed": true, 00:16:33.512 "claim_type": "exclusive_write", 00:16:33.512 "zoned": false, 00:16:33.512 "supported_io_types": { 00:16:33.512 "read": true, 00:16:33.512 "write": true, 00:16:33.512 "unmap": true, 00:16:33.512 "write_zeroes": true, 00:16:33.512 "flush": true, 00:16:33.512 "reset": true, 00:16:33.512 "compare": false, 00:16:33.512 "compare_and_write": false, 00:16:33.512 "abort": true, 00:16:33.512 "nvme_admin": false, 00:16:33.512 "nvme_io": false 00:16:33.512 }, 00:16:33.512 "memory_domains": [ 00:16:33.512 { 00:16:33.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.512 "dma_device_type": 2 00:16:33.512 } 00:16:33.512 ], 00:16:33.512 "driver_specific": {} 00:16:33.512 } 00:16:33.512 ] 00:16:33.512 13:01:52 -- common/autotest_common.sh@895 -- # return 0 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.512 13:01:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.769 13:01:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.769 "name": "Existed_Raid", 00:16:33.769 "uuid": "9f2a78d3-b8b9-4eb8-a61e-0ce818b70f4c", 00:16:33.769 "strip_size_kb": 64, 00:16:33.769 "state": "configuring", 00:16:33.769 "raid_level": "concat", 00:16:33.769 "superblock": true, 00:16:33.769 "num_base_bdevs": 3, 00:16:33.769 "num_base_bdevs_discovered": 1, 00:16:33.769 "num_base_bdevs_operational": 3, 00:16:33.769 "base_bdevs_list": [ 00:16:33.769 { 00:16:33.769 "name": "BaseBdev1", 00:16:33.769 "uuid": "90ec3c63-8a41-4c37-96e5-41a9387ceb2d", 00:16:33.769 "is_configured": true, 00:16:33.769 "data_offset": 2048, 00:16:33.769 "data_size": 63488 00:16:33.769 }, 00:16:33.769 { 00:16:33.769 "name": "BaseBdev2", 00:16:33.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.769 "is_configured": false, 00:16:33.769 "data_offset": 0, 00:16:33.769 "data_size": 0 00:16:33.769 }, 00:16:33.769 { 00:16:33.769 "name": "BaseBdev3", 00:16:33.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.769 "is_configured": false, 00:16:33.769 "data_offset": 0, 00:16:33.769 "data_size": 0 00:16:33.769 } 00:16:33.769 ] 00:16:33.769 }' 00:16:33.769 13:01:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.769 13:01:52 -- common/autotest_common.sh@10 -- # set +x 00:16:34.334 13:01:53 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:34.592 [2024-06-11 13:01:53.323290] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.592 [2024-06-11 13:01:53.323464] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:34.592 13:01:53 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:34.592 13:01:53 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:34.851 13:01:53 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:35.109 BaseBdev1 00:16:35.109 13:01:53 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:35.109 13:01:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:35.109 13:01:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:35.109 13:01:53 -- common/autotest_common.sh@889 -- # local i 00:16:35.109 13:01:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:35.110 13:01:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:35.110 13:01:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.368 13:01:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:35.627 [ 00:16:35.627 { 00:16:35.627 "name": "BaseBdev1", 00:16:35.627 "aliases": [ 00:16:35.627 "8f367888-b419-4aaa-abe9-3838862dc2ee" 00:16:35.627 ], 00:16:35.627 "product_name": "Malloc disk", 00:16:35.627 "block_size": 512, 00:16:35.627 "num_blocks": 65536, 00:16:35.627 "uuid": "8f367888-b419-4aaa-abe9-3838862dc2ee", 00:16:35.627 "assigned_rate_limits": { 00:16:35.627 "rw_ios_per_sec": 0, 00:16:35.627 "rw_mbytes_per_sec": 0, 00:16:35.627 "r_mbytes_per_sec": 0, 00:16:35.627 "w_mbytes_per_sec": 0 00:16:35.627 }, 00:16:35.627 "claimed": false, 00:16:35.627 "zoned": false, 00:16:35.627 "supported_io_types": { 00:16:35.627 "read": true, 00:16:35.627 "write": true, 00:16:35.627 "unmap": true, 00:16:35.627 "write_zeroes": true, 00:16:35.627 "flush": true, 00:16:35.627 "reset": true, 00:16:35.627 "compare": false, 00:16:35.627 "compare_and_write": false, 00:16:35.627 "abort": true, 00:16:35.627 "nvme_admin": false, 00:16:35.627 "nvme_io": false 00:16:35.627 }, 00:16:35.627 "memory_domains": [ 00:16:35.627 { 00:16:35.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.627 "dma_device_type": 2 00:16:35.627 } 00:16:35.627 ], 00:16:35.627 "driver_specific": {} 00:16:35.627 } 00:16:35.627 ] 00:16:35.627 13:01:54 -- common/autotest_common.sh@895 -- # return 0 00:16:35.627 13:01:54 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:35.886 [2024-06-11 13:01:54.499362] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.886 [2024-06-11 13:01:54.501178] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.886 [2024-06-11 13:01:54.501379] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.886 [2024-06-11 13:01:54.501534] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:35.886 [2024-06-11 
13:01:54.501597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.886 "name": "Existed_Raid", 00:16:35.886 "uuid": "45843114-4f0d-49c8-831b-0197e5f09c95", 00:16:35.886 "strip_size_kb": 64, 00:16:35.886 "state": "configuring", 00:16:35.886 "raid_level": "concat", 00:16:35.886 "superblock": true, 00:16:35.886 "num_base_bdevs": 3, 00:16:35.886 "num_base_bdevs_discovered": 1, 00:16:35.886 "num_base_bdevs_operational": 3, 00:16:35.886 "base_bdevs_list": [ 00:16:35.886 { 00:16:35.886 "name": "BaseBdev1", 00:16:35.886 "uuid": "8f367888-b419-4aaa-abe9-3838862dc2ee", 00:16:35.886 "is_configured": true, 00:16:35.886 "data_offset": 2048, 00:16:35.886 "data_size": 63488 00:16:35.886 }, 00:16:35.886 { 00:16:35.886 "name": "BaseBdev2", 00:16:35.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.886 "is_configured": false, 00:16:35.886 "data_offset": 0, 00:16:35.886 "data_size": 0 00:16:35.886 }, 00:16:35.886 { 00:16:35.886 "name": "BaseBdev3", 00:16:35.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.886 "is_configured": false, 00:16:35.886 "data_offset": 0, 00:16:35.886 "data_size": 0 00:16:35.886 } 00:16:35.886 ] 00:16:35.886 }' 00:16:35.886 13:01:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.886 13:01:54 -- common/autotest_common.sh@10 -- # set +x 00:16:36.821 13:01:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:36.821 [2024-06-11 13:01:55.641928] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:36.821 BaseBdev2 00:16:37.080 13:01:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:37.080 13:01:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:37.080 13:01:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:37.080 13:01:55 -- common/autotest_common.sh@889 -- # local i 00:16:37.080 13:01:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:37.080 13:01:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:37.080 13:01:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.080 13:01:55 -- 
common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.338 [ 00:16:37.338 { 00:16:37.338 "name": "BaseBdev2", 00:16:37.338 "aliases": [ 00:16:37.338 "52a6556f-b5d1-4339-bc53-c85f5e36f116" 00:16:37.338 ], 00:16:37.338 "product_name": "Malloc disk", 00:16:37.338 "block_size": 512, 00:16:37.338 "num_blocks": 65536, 00:16:37.338 "uuid": "52a6556f-b5d1-4339-bc53-c85f5e36f116", 00:16:37.338 "assigned_rate_limits": { 00:16:37.338 "rw_ios_per_sec": 0, 00:16:37.338 "rw_mbytes_per_sec": 0, 00:16:37.338 "r_mbytes_per_sec": 0, 00:16:37.338 "w_mbytes_per_sec": 0 00:16:37.338 }, 00:16:37.338 "claimed": true, 00:16:37.338 "claim_type": "exclusive_write", 00:16:37.338 "zoned": false, 00:16:37.338 "supported_io_types": { 00:16:37.338 "read": true, 00:16:37.338 "write": true, 00:16:37.338 "unmap": true, 00:16:37.338 "write_zeroes": true, 00:16:37.338 "flush": true, 00:16:37.338 "reset": true, 00:16:37.338 "compare": false, 00:16:37.338 "compare_and_write": false, 00:16:37.338 "abort": true, 00:16:37.338 "nvme_admin": false, 00:16:37.338 "nvme_io": false 00:16:37.338 }, 00:16:37.338 "memory_domains": [ 00:16:37.338 { 00:16:37.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.338 "dma_device_type": 2 00:16:37.338 } 00:16:37.338 ], 00:16:37.338 "driver_specific": {} 00:16:37.338 } 00:16:37.338 ] 00:16:37.338 13:01:56 -- common/autotest_common.sh@895 -- # return 0 00:16:37.338 13:01:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:37.338 13:01:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.339 13:01:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.597 13:01:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:37.597 "name": "Existed_Raid", 00:16:37.597 "uuid": "45843114-4f0d-49c8-831b-0197e5f09c95", 00:16:37.597 "strip_size_kb": 64, 00:16:37.597 "state": "configuring", 00:16:37.597 "raid_level": "concat", 00:16:37.597 "superblock": true, 00:16:37.597 "num_base_bdevs": 3, 00:16:37.597 "num_base_bdevs_discovered": 2, 00:16:37.597 "num_base_bdevs_operational": 3, 00:16:37.597 "base_bdevs_list": [ 00:16:37.597 { 00:16:37.597 "name": "BaseBdev1", 00:16:37.597 "uuid": "8f367888-b419-4aaa-abe9-3838862dc2ee", 00:16:37.597 "is_configured": true, 00:16:37.597 "data_offset": 2048, 00:16:37.597 "data_size": 63488 00:16:37.597 }, 00:16:37.597 { 00:16:37.597 "name": "BaseBdev2", 00:16:37.597 "uuid": "52a6556f-b5d1-4339-bc53-c85f5e36f116", 00:16:37.597 "is_configured": true, 00:16:37.597 "data_offset": 2048, 00:16:37.597 
"data_size": 63488 00:16:37.597 }, 00:16:37.597 { 00:16:37.597 "name": "BaseBdev3", 00:16:37.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.597 "is_configured": false, 00:16:37.597 "data_offset": 0, 00:16:37.597 "data_size": 0 00:16:37.597 } 00:16:37.597 ] 00:16:37.597 }' 00:16:37.597 13:01:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:37.597 13:01:56 -- common/autotest_common.sh@10 -- # set +x 00:16:38.163 13:01:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:38.731 [2024-06-11 13:01:57.267860] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.731 [2024-06-11 13:01:57.268310] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:38.731 [2024-06-11 13:01:57.268491] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:38.731 BaseBdev3 00:16:38.731 [2024-06-11 13:01:57.268650] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:38.731 [2024-06-11 13:01:57.269070] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:38.731 [2024-06-11 13:01:57.269237] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:38.731 [2024-06-11 13:01:57.269567] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.731 13:01:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:38.731 13:01:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:38.731 13:01:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:38.731 13:01:57 -- common/autotest_common.sh@889 -- # local i 00:16:38.731 13:01:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:38.731 13:01:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:38.731 13:01:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.731 13:01:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.989 [ 00:16:38.989 { 00:16:38.989 "name": "BaseBdev3", 00:16:38.989 "aliases": [ 00:16:38.989 "632c4503-57ca-4d1d-bc29-1f3e74c400f3" 00:16:38.989 ], 00:16:38.989 "product_name": "Malloc disk", 00:16:38.989 "block_size": 512, 00:16:38.989 "num_blocks": 65536, 00:16:38.989 "uuid": "632c4503-57ca-4d1d-bc29-1f3e74c400f3", 00:16:38.989 "assigned_rate_limits": { 00:16:38.989 "rw_ios_per_sec": 0, 00:16:38.989 "rw_mbytes_per_sec": 0, 00:16:38.989 "r_mbytes_per_sec": 0, 00:16:38.989 "w_mbytes_per_sec": 0 00:16:38.989 }, 00:16:38.989 "claimed": true, 00:16:38.989 "claim_type": "exclusive_write", 00:16:38.989 "zoned": false, 00:16:38.989 "supported_io_types": { 00:16:38.989 "read": true, 00:16:38.989 "write": true, 00:16:38.989 "unmap": true, 00:16:38.989 "write_zeroes": true, 00:16:38.989 "flush": true, 00:16:38.989 "reset": true, 00:16:38.989 "compare": false, 00:16:38.989 "compare_and_write": false, 00:16:38.989 "abort": true, 00:16:38.989 "nvme_admin": false, 00:16:38.989 "nvme_io": false 00:16:38.989 }, 00:16:38.989 "memory_domains": [ 00:16:38.989 { 00:16:38.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.989 "dma_device_type": 2 00:16:38.989 } 00:16:38.989 ], 00:16:38.989 "driver_specific": {} 00:16:38.989 } 00:16:38.989 ] 00:16:38.989 
13:01:57 -- common/autotest_common.sh@895 -- # return 0 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.989 13:01:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.247 13:01:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.247 "name": "Existed_Raid", 00:16:39.247 "uuid": "45843114-4f0d-49c8-831b-0197e5f09c95", 00:16:39.247 "strip_size_kb": 64, 00:16:39.247 "state": "online", 00:16:39.247 "raid_level": "concat", 00:16:39.247 "superblock": true, 00:16:39.247 "num_base_bdevs": 3, 00:16:39.247 "num_base_bdevs_discovered": 3, 00:16:39.247 "num_base_bdevs_operational": 3, 00:16:39.247 "base_bdevs_list": [ 00:16:39.247 { 00:16:39.247 "name": "BaseBdev1", 00:16:39.247 "uuid": "8f367888-b419-4aaa-abe9-3838862dc2ee", 00:16:39.247 "is_configured": true, 00:16:39.247 "data_offset": 2048, 00:16:39.247 "data_size": 63488 00:16:39.247 }, 00:16:39.247 { 00:16:39.247 "name": "BaseBdev2", 00:16:39.247 "uuid": "52a6556f-b5d1-4339-bc53-c85f5e36f116", 00:16:39.247 "is_configured": true, 00:16:39.247 "data_offset": 2048, 00:16:39.247 "data_size": 63488 00:16:39.247 }, 00:16:39.247 { 00:16:39.247 "name": "BaseBdev3", 00:16:39.247 "uuid": "632c4503-57ca-4d1d-bc29-1f3e74c400f3", 00:16:39.247 "is_configured": true, 00:16:39.247 "data_offset": 2048, 00:16:39.247 "data_size": 63488 00:16:39.247 } 00:16:39.247 ] 00:16:39.247 }' 00:16:39.247 13:01:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.247 13:01:57 -- common/autotest_common.sh@10 -- # set +x 00:16:39.814 13:01:58 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:40.072 [2024-06-11 13:01:58.816305] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.072 [2024-06-11 13:01:58.816498] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.072 [2024-06-11 13:01:58.816700] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.072 13:01:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:40.072 13:01:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:40.072 13:01:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:40.072 13:01:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:40.072 13:01:58 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:40.072 13:01:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:40.072 13:01:58 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:40.072 13:01:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:40.072 13:01:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:40.072 13:01:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:40.073 13:01:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:40.073 13:01:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.073 13:01:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.073 13:01:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.073 13:01:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.073 13:01:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.073 13:01:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.331 13:01:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.331 "name": "Existed_Raid", 00:16:40.331 "uuid": "45843114-4f0d-49c8-831b-0197e5f09c95", 00:16:40.331 "strip_size_kb": 64, 00:16:40.331 "state": "offline", 00:16:40.331 "raid_level": "concat", 00:16:40.331 "superblock": true, 00:16:40.332 "num_base_bdevs": 3, 00:16:40.332 "num_base_bdevs_discovered": 2, 00:16:40.332 "num_base_bdevs_operational": 2, 00:16:40.332 "base_bdevs_list": [ 00:16:40.332 { 00:16:40.332 "name": null, 00:16:40.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.332 "is_configured": false, 00:16:40.332 "data_offset": 2048, 00:16:40.332 "data_size": 63488 00:16:40.332 }, 00:16:40.332 { 00:16:40.332 "name": "BaseBdev2", 00:16:40.332 "uuid": "52a6556f-b5d1-4339-bc53-c85f5e36f116", 00:16:40.332 "is_configured": true, 00:16:40.332 "data_offset": 2048, 00:16:40.332 "data_size": 63488 00:16:40.332 }, 00:16:40.332 { 00:16:40.332 "name": "BaseBdev3", 00:16:40.332 "uuid": "632c4503-57ca-4d1d-bc29-1f3e74c400f3", 00:16:40.332 "is_configured": true, 00:16:40.332 "data_offset": 2048, 00:16:40.332 "data_size": 63488 00:16:40.332 } 00:16:40.332 ] 00:16:40.332 }' 00:16:40.332 13:01:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.332 13:01:59 -- common/autotest_common.sh@10 -- # set +x 00:16:41.272 13:01:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:41.272 13:01:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:41.272 13:01:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.272 13:01:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:41.272 13:02:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:41.272 13:02:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:41.272 13:02:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:41.566 [2024-06-11 13:02:00.192870] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:41.566 13:02:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:41.566 13:02:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:41.566 13:02:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.566 13:02:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:41.836 13:02:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:41.836 13:02:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:41.836 13:02:00 -- 
bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:42.094 [2024-06-11 13:02:00.765778] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:42.094 [2024-06-11 13:02:00.767166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:42.094 13:02:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:42.094 13:02:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:42.094 13:02:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.094 13:02:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:42.352 13:02:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:42.352 13:02:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:42.352 13:02:01 -- bdev/bdev_raid.sh@287 -- # killprocess 119227 00:16:42.352 13:02:01 -- common/autotest_common.sh@926 -- # '[' -z 119227 ']' 00:16:42.352 13:02:01 -- common/autotest_common.sh@930 -- # kill -0 119227 00:16:42.352 13:02:01 -- common/autotest_common.sh@931 -- # uname 00:16:42.352 13:02:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:42.352 13:02:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119227 00:16:42.352 killing process with pid 119227 00:16:42.352 13:02:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:42.352 13:02:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:42.352 13:02:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119227' 00:16:42.352 13:02:01 -- common/autotest_common.sh@945 -- # kill 119227 00:16:42.352 13:02:01 -- common/autotest_common.sh@950 -- # wait 119227 00:16:42.352 [2024-06-11 13:02:01.118016] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.352 [2024-06-11 13:02:01.118190] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.287 ************************************ 00:16:43.287 END TEST raid_state_function_test_sb 00:16:43.287 ************************************ 00:16:43.287 13:02:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:43.287 00:16:43.287 real 0m13.161s 00:16:43.287 user 0m23.562s 00:16:43.287 sys 0m1.408s 00:16:43.287 13:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.287 13:02:02 -- common/autotest_common.sh@10 -- # set +x 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:43.544 13:02:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:43.544 13:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:43.544 13:02:02 -- common/autotest_common.sh@10 -- # set +x 00:16:43.544 ************************************ 00:16:43.544 START TEST raid_superblock_test 00:16:43.544 ************************************ 00:16:43.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:43.544 13:02:02 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@357 -- # raid_pid=119658 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@358 -- # waitforlisten 119658 /var/tmp/spdk-raid.sock 00:16:43.544 13:02:02 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:43.544 13:02:02 -- common/autotest_common.sh@819 -- # '[' -z 119658 ']' 00:16:43.544 13:02:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:43.544 13:02:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:43.544 13:02:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:43.544 13:02:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:43.544 13:02:02 -- common/autotest_common.sh@10 -- # set +x 00:16:43.544 [2024-06-11 13:02:02.201763] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:43.544 [2024-06-11 13:02:02.202136] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119658 ] 00:16:43.544 [2024-06-11 13:02:02.354026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.801 [2024-06-11 13:02:02.524569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.059 [2024-06-11 13:02:02.700190] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.317 13:02:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:44.317 13:02:03 -- common/autotest_common.sh@852 -- # return 0 00:16:44.317 13:02:03 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:44.317 13:02:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:44.317 13:02:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:44.317 13:02:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:44.317 13:02:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:44.317 13:02:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:44.317 13:02:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:44.317 13:02:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:44.317 13:02:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:44.575 malloc1 00:16:44.575 13:02:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:44.834 [2024-06-11 13:02:03.549759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:44.834 [2024-06-11 13:02:03.550234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.834 [2024-06-11 13:02:03.550439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:44.834 [2024-06-11 13:02:03.550666] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.834 [2024-06-11 13:02:03.554372] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.834 [2024-06-11 13:02:03.554601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:44.834 pt1 00:16:44.834 13:02:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:44.834 13:02:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:44.834 13:02:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:44.834 13:02:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:44.834 13:02:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:44.834 13:02:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:44.834 13:02:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:44.834 13:02:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:44.834 13:02:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:45.091 malloc2 00:16:45.091 13:02:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:16:45.349 [2024-06-11 13:02:03.994187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:45.349 [2024-06-11 13:02:03.994444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.349 [2024-06-11 13:02:03.994528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:45.349 [2024-06-11 13:02:03.994881] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.349 [2024-06-11 13:02:03.997310] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.349 [2024-06-11 13:02:03.997512] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:45.349 pt2 00:16:45.349 13:02:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:45.349 13:02:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:45.349 13:02:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:45.349 13:02:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:45.349 13:02:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:45.349 13:02:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:45.349 13:02:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:45.349 13:02:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:45.349 13:02:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:45.607 malloc3 00:16:45.607 13:02:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:45.607 [2024-06-11 13:02:04.411505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:45.608 [2024-06-11 13:02:04.411788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.608 [2024-06-11 13:02:04.411868] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:45.608 [2024-06-11 13:02:04.412182] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.608 [2024-06-11 13:02:04.414541] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.608 [2024-06-11 13:02:04.414739] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:45.608 pt3 00:16:45.608 13:02:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:45.608 13:02:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:45.608 13:02:04 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:45.865 [2024-06-11 13:02:04.615573] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:45.865 [2024-06-11 13:02:04.617839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:45.865 [2024-06-11 13:02:04.618050] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:45.866 [2024-06-11 13:02:04.618305] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:45.866 [2024-06-11 13:02:04.618457] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:45.866 [2024-06-11 13:02:04.618626] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:45.866 [2024-06-11 13:02:04.619046] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:45.866 [2024-06-11 13:02:04.619179] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:45.866 [2024-06-11 13:02:04.619406] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.866 13:02:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.123 13:02:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.123 "name": "raid_bdev1", 00:16:46.123 "uuid": "d810cc4c-ba8b-4b59-a773-67a048bed093", 00:16:46.123 "strip_size_kb": 64, 00:16:46.123 "state": "online", 00:16:46.123 "raid_level": "concat", 00:16:46.123 "superblock": true, 00:16:46.123 "num_base_bdevs": 3, 00:16:46.123 "num_base_bdevs_discovered": 3, 00:16:46.123 "num_base_bdevs_operational": 3, 00:16:46.123 "base_bdevs_list": [ 00:16:46.123 { 00:16:46.123 "name": "pt1", 00:16:46.123 "uuid": "382b78d0-e19b-544b-a3b7-6d39b3abcc64", 00:16:46.123 "is_configured": true, 00:16:46.123 "data_offset": 2048, 00:16:46.123 "data_size": 63488 00:16:46.123 }, 00:16:46.123 { 00:16:46.123 "name": "pt2", 00:16:46.123 "uuid": "c5942431-21fc-5bee-ac32-33ce60a5686d", 00:16:46.123 "is_configured": true, 00:16:46.123 "data_offset": 2048, 00:16:46.123 "data_size": 63488 00:16:46.123 }, 00:16:46.123 { 00:16:46.123 "name": "pt3", 00:16:46.123 "uuid": "6defd2c9-dd63-583a-863a-570ec85a1ca9", 00:16:46.123 "is_configured": true, 00:16:46.123 "data_offset": 2048, 00:16:46.123 "data_size": 63488 00:16:46.123 } 00:16:46.123 ] 00:16:46.123 }' 00:16:46.123 13:02:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.123 13:02:04 -- common/autotest_common.sh@10 -- # set +x 00:16:46.690 13:02:05 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:46.690 13:02:05 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:46.948 [2024-06-11 13:02:05.636034] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.948 13:02:05 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d810cc4c-ba8b-4b59-a773-67a048bed093 00:16:46.948 13:02:05 -- bdev/bdev_raid.sh@380 -- # '[' -z d810cc4c-ba8b-4b59-a773-67a048bed093 ']' 00:16:46.948 13:02:05 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:47.206 [2024-06-11 13:02:05.879909] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.206 [2024-06-11 13:02:05.880095] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.206 [2024-06-11 13:02:05.880292] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.206 [2024-06-11 13:02:05.880484] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.206 [2024-06-11 13:02:05.880623] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:47.206 13:02:05 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.206 13:02:05 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:47.464 13:02:06 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:47.464 13:02:06 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:47.464 13:02:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:47.464 13:02:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:47.722 13:02:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:47.722 13:02:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:47.722 13:02:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:47.722 13:02:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:47.981 13:02:06 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:47.981 13:02:06 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:48.240 13:02:07 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:48.240 13:02:07 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:48.240 13:02:07 -- common/autotest_common.sh@640 -- # local es=0 00:16:48.240 13:02:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:48.240 13:02:07 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.240 13:02:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:48.240 13:02:07 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.240 13:02:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:48.240 13:02:07 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.240 13:02:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:48.240 13:02:07 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.240 13:02:07 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:48.240 13:02:07 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:48.498 [2024-06-11 13:02:07.248175] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:48.498 [2024-06-11 13:02:07.250139] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:48.498 [2024-06-11 13:02:07.250344] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:48.498 [2024-06-11 13:02:07.250452] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:48.498 [2024-06-11 13:02:07.250714] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:48.498 [2024-06-11 13:02:07.250892] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:48.498 [2024-06-11 13:02:07.251046] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.498 [2024-06-11 13:02:07.251152] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:16:48.498 request: 00:16:48.498 { 00:16:48.498 "name": "raid_bdev1", 00:16:48.498 "raid_level": "concat", 00:16:48.498 "base_bdevs": [ 00:16:48.498 "malloc1", 00:16:48.498 "malloc2", 00:16:48.498 "malloc3" 00:16:48.498 ], 00:16:48.498 "superblock": false, 00:16:48.498 "strip_size_kb": 64, 00:16:48.498 "method": "bdev_raid_create", 00:16:48.498 "req_id": 1 00:16:48.498 } 00:16:48.498 Got JSON-RPC error response 00:16:48.498 response: 00:16:48.498 { 00:16:48.498 "code": -17, 00:16:48.498 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:48.498 } 00:16:48.498 13:02:07 -- common/autotest_common.sh@643 -- # es=1 00:16:48.498 13:02:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:48.498 13:02:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:48.498 13:02:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:48.498 13:02:07 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.498 13:02:07 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:48.757 13:02:07 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:48.757 13:02:07 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:48.757 13:02:07 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:49.015 [2024-06-11 13:02:07.636152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:49.015 [2024-06-11 13:02:07.636383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.015 [2024-06-11 13:02:07.636467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:49.015 [2024-06-11 13:02:07.636729] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.015 [2024-06-11 13:02:07.639125] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.015 [2024-06-11 13:02:07.639289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:49.015 [2024-06-11 13:02:07.639510] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:49.015 [2024-06-11 13:02:07.639689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:49.015 pt1 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring concat 64 3 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.015 13:02:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:49.015 "name": "raid_bdev1", 00:16:49.015 "uuid": "d810cc4c-ba8b-4b59-a773-67a048bed093", 00:16:49.015 "strip_size_kb": 64, 00:16:49.015 "state": "configuring", 00:16:49.015 "raid_level": "concat", 00:16:49.015 "superblock": true, 00:16:49.015 "num_base_bdevs": 3, 00:16:49.015 "num_base_bdevs_discovered": 1, 00:16:49.015 "num_base_bdevs_operational": 3, 00:16:49.015 "base_bdevs_list": [ 00:16:49.015 { 00:16:49.015 "name": "pt1", 00:16:49.015 "uuid": "382b78d0-e19b-544b-a3b7-6d39b3abcc64", 00:16:49.015 "is_configured": true, 00:16:49.015 "data_offset": 2048, 00:16:49.015 "data_size": 63488 00:16:49.015 }, 00:16:49.015 { 00:16:49.015 "name": null, 00:16:49.015 "uuid": "c5942431-21fc-5bee-ac32-33ce60a5686d", 00:16:49.015 "is_configured": false, 00:16:49.015 "data_offset": 2048, 00:16:49.015 "data_size": 63488 00:16:49.015 }, 00:16:49.015 { 00:16:49.015 "name": null, 00:16:49.015 "uuid": "6defd2c9-dd63-583a-863a-570ec85a1ca9", 00:16:49.015 "is_configured": false, 00:16:49.015 "data_offset": 2048, 00:16:49.016 "data_size": 63488 00:16:49.016 } 00:16:49.016 ] 00:16:49.016 }' 00:16:49.016 13:02:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:49.016 13:02:07 -- common/autotest_common.sh@10 -- # set +x 00:16:49.951 13:02:08 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:49.951 13:02:08 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.951 [2024-06-11 13:02:08.676387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.951 [2024-06-11 13:02:08.676710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.951 [2024-06-11 13:02:08.676884] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:49.951 [2024-06-11 13:02:08.677053] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.951 [2024-06-11 13:02:08.677779] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.951 [2024-06-11 13:02:08.677952] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.951 [2024-06-11 13:02:08.678187] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:49.951 [2024-06-11 13:02:08.678315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.951 pt2 00:16:49.951 13:02:08 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:50.210 [2024-06-11 13:02:08.916478] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.210 13:02:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.469 13:02:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.469 "name": "raid_bdev1", 00:16:50.469 "uuid": "d810cc4c-ba8b-4b59-a773-67a048bed093", 00:16:50.469 "strip_size_kb": 64, 00:16:50.469 "state": "configuring", 00:16:50.469 "raid_level": "concat", 00:16:50.469 "superblock": true, 00:16:50.469 "num_base_bdevs": 3, 00:16:50.469 "num_base_bdevs_discovered": 1, 00:16:50.469 "num_base_bdevs_operational": 3, 00:16:50.469 "base_bdevs_list": [ 00:16:50.469 { 00:16:50.469 "name": "pt1", 00:16:50.469 "uuid": "382b78d0-e19b-544b-a3b7-6d39b3abcc64", 00:16:50.469 "is_configured": true, 00:16:50.469 "data_offset": 2048, 00:16:50.469 "data_size": 63488 00:16:50.469 }, 00:16:50.469 { 00:16:50.469 "name": null, 00:16:50.469 "uuid": "c5942431-21fc-5bee-ac32-33ce60a5686d", 00:16:50.469 "is_configured": false, 00:16:50.469 "data_offset": 2048, 00:16:50.469 "data_size": 63488 00:16:50.469 }, 00:16:50.469 { 00:16:50.469 "name": null, 00:16:50.469 "uuid": "6defd2c9-dd63-583a-863a-570ec85a1ca9", 00:16:50.469 "is_configured": false, 00:16:50.469 "data_offset": 2048, 00:16:50.469 "data_size": 63488 00:16:50.469 } 00:16:50.470 ] 00:16:50.470 }' 00:16:50.470 13:02:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.470 13:02:09 -- common/autotest_common.sh@10 -- # set +x 00:16:51.036 13:02:09 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:51.036 13:02:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:51.036 13:02:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.294 [2024-06-11 13:02:10.036670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.294 [2024-06-11 13:02:10.036916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.294 [2024-06-11 13:02:10.037099] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:51.294 [2024-06-11 13:02:10.037249] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.294 [2024-06-11 13:02:10.037846] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.294 [2024-06-11 13:02:10.038015] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:51.294 [2024-06-11 13:02:10.038285] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:51.294 [2024-06-11 13:02:10.038427] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.294 pt2 00:16:51.294 13:02:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:51.294 13:02:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:51.294 13:02:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:51.552 [2024-06-11 13:02:10.212692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:51.552 [2024-06-11 13:02:10.212890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.552 [2024-06-11 13:02:10.212961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:51.552 [2024-06-11 13:02:10.213090] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.552 [2024-06-11 13:02:10.213547] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.552 [2024-06-11 13:02:10.213711] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:51.552 [2024-06-11 13:02:10.213984] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:51.552 [2024-06-11 13:02:10.214103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:51.552 [2024-06-11 13:02:10.214315] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:16:51.552 [2024-06-11 13:02:10.214413] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:51.552 [2024-06-11 13:02:10.214567] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:51.552 [2024-06-11 13:02:10.214965] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:16:51.552 [2024-06-11 13:02:10.215097] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:16:51.552 [2024-06-11 13:02:10.215315] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.552 pt3 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.552 13:02:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.552 
13:02:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.809 13:02:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.809 "name": "raid_bdev1", 00:16:51.809 "uuid": "d810cc4c-ba8b-4b59-a773-67a048bed093", 00:16:51.809 "strip_size_kb": 64, 00:16:51.809 "state": "online", 00:16:51.809 "raid_level": "concat", 00:16:51.809 "superblock": true, 00:16:51.809 "num_base_bdevs": 3, 00:16:51.809 "num_base_bdevs_discovered": 3, 00:16:51.809 "num_base_bdevs_operational": 3, 00:16:51.809 "base_bdevs_list": [ 00:16:51.809 { 00:16:51.809 "name": "pt1", 00:16:51.809 "uuid": "382b78d0-e19b-544b-a3b7-6d39b3abcc64", 00:16:51.809 "is_configured": true, 00:16:51.809 "data_offset": 2048, 00:16:51.809 "data_size": 63488 00:16:51.809 }, 00:16:51.809 { 00:16:51.809 "name": "pt2", 00:16:51.809 "uuid": "c5942431-21fc-5bee-ac32-33ce60a5686d", 00:16:51.809 "is_configured": true, 00:16:51.809 "data_offset": 2048, 00:16:51.809 "data_size": 63488 00:16:51.809 }, 00:16:51.809 { 00:16:51.809 "name": "pt3", 00:16:51.809 "uuid": "6defd2c9-dd63-583a-863a-570ec85a1ca9", 00:16:51.809 "is_configured": true, 00:16:51.809 "data_offset": 2048, 00:16:51.809 "data_size": 63488 00:16:51.809 } 00:16:51.809 ] 00:16:51.809 }' 00:16:51.809 13:02:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.809 13:02:10 -- common/autotest_common.sh@10 -- # set +x 00:16:52.374 13:02:11 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:52.375 13:02:11 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:52.633 [2024-06-11 13:02:11.441259] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.633 13:02:11 -- bdev/bdev_raid.sh@430 -- # '[' d810cc4c-ba8b-4b59-a773-67a048bed093 '!=' d810cc4c-ba8b-4b59-a773-67a048bed093 ']' 00:16:52.633 13:02:11 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:52.633 13:02:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:52.633 13:02:11 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:52.633 13:02:11 -- bdev/bdev_raid.sh@511 -- # killprocess 119658 00:16:52.633 13:02:11 -- common/autotest_common.sh@926 -- # '[' -z 119658 ']' 00:16:52.633 13:02:11 -- common/autotest_common.sh@930 -- # kill -0 119658 00:16:52.633 13:02:11 -- common/autotest_common.sh@931 -- # uname 00:16:52.633 13:02:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:52.633 13:02:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119658 00:16:52.891 13:02:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:52.891 13:02:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:52.891 13:02:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119658' 00:16:52.891 killing process with pid 119658 00:16:52.891 13:02:11 -- common/autotest_common.sh@945 -- # kill 119658 00:16:52.891 13:02:11 -- common/autotest_common.sh@950 -- # wait 119658 00:16:52.891 [2024-06-11 13:02:11.482223] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.891 [2024-06-11 13:02:11.482305] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.891 [2024-06-11 13:02:11.482371] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.891 [2024-06-11 13:02:11.482383] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:16:52.891 [2024-06-11 13:02:11.690861] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:54.262 00:16:54.262 real 0m10.573s 00:16:54.262 user 0m18.551s 00:16:54.262 sys 0m1.213s 00:16:54.262 13:02:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.262 13:02:12 -- common/autotest_common.sh@10 -- # set +x 00:16:54.262 ************************************ 00:16:54.262 END TEST raid_superblock_test 00:16:54.262 ************************************ 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:54.262 13:02:12 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:54.262 13:02:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:54.262 13:02:12 -- common/autotest_common.sh@10 -- # set +x 00:16:54.262 ************************************ 00:16:54.262 START TEST raid_state_function_test 00:16:54.262 ************************************ 00:16:54.262 13:02:12 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:54.262 Process raid pid: 119986 00:16:54.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
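The trace above shows raid_state_function_test assembling its base bdev names (BaseBdev1..BaseBdev3) and its raid1/no-superblock parameters before launching bdev_svc on /var/tmp/spdk-raid.sock. A minimal standalone sketch of that setup step, using only the loop and values visible in the trace (not taken from the test source itself):

    #!/usr/bin/env bash
    # Sketch only: rebuild the base-bdev name list the way the traced test does.
    raid_level=raid1
    num_base_bdevs=3
    superblock=false
    # Produces "BaseBdev1 BaseBdev2 BaseBdev3", mirroring the for-loop in the trace.
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
    echo "level=$raid_level superblock=$superblock bdevs: ${base_bdevs[*]}"

Run on its own this only prints the parameters; in the traced test they are later passed to bdev_raid_create over the RPC socket once bdev_svc is listening.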
00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@226 -- # raid_pid=119986 00:16:54.262 13:02:12 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119986' 00:16:54.263 13:02:12 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119986 /var/tmp/spdk-raid.sock 00:16:54.263 13:02:12 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:54.263 13:02:12 -- common/autotest_common.sh@819 -- # '[' -z 119986 ']' 00:16:54.263 13:02:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.263 13:02:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:54.263 13:02:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.263 13:02:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:54.263 13:02:12 -- common/autotest_common.sh@10 -- # set +x 00:16:54.263 [2024-06-11 13:02:12.848135] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:54.263 [2024-06-11 13:02:12.848554] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.263 [2024-06-11 13:02:13.006934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.521 [2024-06-11 13:02:13.215520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.779 [2024-06-11 13:02:13.406716] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.038 13:02:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:55.038 13:02:13 -- common/autotest_common.sh@852 -- # return 0 00:16:55.038 13:02:13 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:55.296 [2024-06-11 13:02:14.020392] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:55.296 [2024-06-11 13:02:14.020677] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:55.296 [2024-06-11 13:02:14.020809] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.296 [2024-06-11 13:02:14.020866] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.296 [2024-06-11 13:02:14.020956] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:55.296 [2024-06-11 13:02:14.021035] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.296 13:02:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.565 13:02:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.566 "name": "Existed_Raid", 00:16:55.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.566 "strip_size_kb": 0, 00:16:55.566 "state": "configuring", 00:16:55.566 "raid_level": "raid1", 00:16:55.566 "superblock": false, 00:16:55.566 "num_base_bdevs": 3, 00:16:55.566 "num_base_bdevs_discovered": 0, 00:16:55.566 "num_base_bdevs_operational": 3, 00:16:55.566 "base_bdevs_list": [ 00:16:55.566 { 00:16:55.566 "name": "BaseBdev1", 00:16:55.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.566 "is_configured": false, 00:16:55.566 "data_offset": 0, 00:16:55.566 "data_size": 0 00:16:55.566 }, 00:16:55.566 { 00:16:55.566 "name": "BaseBdev2", 00:16:55.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.566 "is_configured": false, 00:16:55.566 "data_offset": 0, 00:16:55.566 "data_size": 0 00:16:55.566 }, 00:16:55.566 { 00:16:55.566 "name": "BaseBdev3", 00:16:55.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.566 "is_configured": false, 00:16:55.566 "data_offset": 0, 00:16:55.566 "data_size": 0 00:16:55.566 } 00:16:55.566 ] 00:16:55.566 }' 00:16:55.566 13:02:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.566 13:02:14 -- common/autotest_common.sh@10 -- # set +x 00:16:56.145 13:02:14 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:56.403 [2024-06-11 13:02:15.136536] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.403 [2024-06-11 13:02:15.138375] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:56.403 13:02:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:56.661 [2024-06-11 13:02:15.320576] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.661 [2024-06-11 13:02:15.320762] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.661 [2024-06-11 13:02:15.320873] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.661 [2024-06-11 13:02:15.320928] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.661 [2024-06-11 13:02:15.321146] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
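At this point the trace has just dumped the Existed_Raid info with "state": "configuring" and zero discovered base bdevs; verify_raid_bdev_state works by pulling that JSON over the RPC socket and comparing individual fields against the expected values. A small sketch of that query, reusing the exact rpc.py invocation and jq filter that appear in the trace (the field extraction shown here is illustrative, not the test's full assertion list):

    # Sketch, assuming the same RPC script and socket path used in the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Fetch all raid bdevs and keep only the one named Existed_Raid, as in the trace.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # Pick out the fields the test compares against its expected state.
    state=$(jq -r '.state' <<< "$info")
    level=$(jq -r '.raid_level' <<< "$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
    operational=$(jq -r '.num_base_bdevs_operational' <<< "$info")
    echo "state=$state level=$level discovered=$discovered operational=$operational"

Against the state dumped just above, this would print state=configuring level=raid1 discovered=0 operational=3; the trace that follows repeats the same query after each base bdev is created and claimed.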
00:16:56.661 [2024-06-11 13:02:15.321232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.661 13:02:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:56.920 [2024-06-11 13:02:15.537810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.920 BaseBdev1 00:16:56.920 13:02:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:56.920 13:02:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:56.920 13:02:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:56.920 13:02:15 -- common/autotest_common.sh@889 -- # local i 00:16:56.920 13:02:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:56.920 13:02:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:56.920 13:02:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.920 13:02:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:57.179 [ 00:16:57.179 { 00:16:57.179 "name": "BaseBdev1", 00:16:57.179 "aliases": [ 00:16:57.179 "35d30038-6910-479f-be85-ec1d425b02c7" 00:16:57.179 ], 00:16:57.179 "product_name": "Malloc disk", 00:16:57.179 "block_size": 512, 00:16:57.179 "num_blocks": 65536, 00:16:57.179 "uuid": "35d30038-6910-479f-be85-ec1d425b02c7", 00:16:57.179 "assigned_rate_limits": { 00:16:57.179 "rw_ios_per_sec": 0, 00:16:57.179 "rw_mbytes_per_sec": 0, 00:16:57.179 "r_mbytes_per_sec": 0, 00:16:57.179 "w_mbytes_per_sec": 0 00:16:57.179 }, 00:16:57.179 "claimed": true, 00:16:57.179 "claim_type": "exclusive_write", 00:16:57.179 "zoned": false, 00:16:57.179 "supported_io_types": { 00:16:57.179 "read": true, 00:16:57.179 "write": true, 00:16:57.179 "unmap": true, 00:16:57.179 "write_zeroes": true, 00:16:57.179 "flush": true, 00:16:57.179 "reset": true, 00:16:57.179 "compare": false, 00:16:57.179 "compare_and_write": false, 00:16:57.179 "abort": true, 00:16:57.179 "nvme_admin": false, 00:16:57.179 "nvme_io": false 00:16:57.179 }, 00:16:57.179 "memory_domains": [ 00:16:57.179 { 00:16:57.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.179 "dma_device_type": 2 00:16:57.179 } 00:16:57.179 ], 00:16:57.179 "driver_specific": {} 00:16:57.179 } 00:16:57.179 ] 00:16:57.179 13:02:15 -- common/autotest_common.sh@895 -- # return 0 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.179 13:02:15 -- bdev/bdev_raid.sh@127 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:16:57.437 13:02:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.437 "name": "Existed_Raid", 00:16:57.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.438 "strip_size_kb": 0, 00:16:57.438 "state": "configuring", 00:16:57.438 "raid_level": "raid1", 00:16:57.438 "superblock": false, 00:16:57.438 "num_base_bdevs": 3, 00:16:57.438 "num_base_bdevs_discovered": 1, 00:16:57.438 "num_base_bdevs_operational": 3, 00:16:57.438 "base_bdevs_list": [ 00:16:57.438 { 00:16:57.438 "name": "BaseBdev1", 00:16:57.438 "uuid": "35d30038-6910-479f-be85-ec1d425b02c7", 00:16:57.438 "is_configured": true, 00:16:57.438 "data_offset": 0, 00:16:57.438 "data_size": 65536 00:16:57.438 }, 00:16:57.438 { 00:16:57.438 "name": "BaseBdev2", 00:16:57.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.438 "is_configured": false, 00:16:57.438 "data_offset": 0, 00:16:57.438 "data_size": 0 00:16:57.438 }, 00:16:57.438 { 00:16:57.438 "name": "BaseBdev3", 00:16:57.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.438 "is_configured": false, 00:16:57.438 "data_offset": 0, 00:16:57.438 "data_size": 0 00:16:57.438 } 00:16:57.438 ] 00:16:57.438 }' 00:16:57.438 13:02:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.438 13:02:16 -- common/autotest_common.sh@10 -- # set +x 00:16:58.004 13:02:16 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:58.263 [2024-06-11 13:02:16.990078] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.263 [2024-06-11 13:02:16.990286] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:58.263 13:02:17 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:58.263 13:02:17 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:58.522 [2024-06-11 13:02:17.186161] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.522 [2024-06-11 13:02:17.188436] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.522 [2024-06-11 13:02:17.188626] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.522 [2024-06-11 13:02:17.188749] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:58.522 [2024-06-11 13:02:17.188814] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.522 13:02:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.781 13:02:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.781 "name": "Existed_Raid", 00:16:58.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.781 "strip_size_kb": 0, 00:16:58.781 "state": "configuring", 00:16:58.781 "raid_level": "raid1", 00:16:58.781 "superblock": false, 00:16:58.781 "num_base_bdevs": 3, 00:16:58.781 "num_base_bdevs_discovered": 1, 00:16:58.781 "num_base_bdevs_operational": 3, 00:16:58.781 "base_bdevs_list": [ 00:16:58.781 { 00:16:58.781 "name": "BaseBdev1", 00:16:58.781 "uuid": "35d30038-6910-479f-be85-ec1d425b02c7", 00:16:58.781 "is_configured": true, 00:16:58.781 "data_offset": 0, 00:16:58.781 "data_size": 65536 00:16:58.781 }, 00:16:58.781 { 00:16:58.781 "name": "BaseBdev2", 00:16:58.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.781 "is_configured": false, 00:16:58.781 "data_offset": 0, 00:16:58.781 "data_size": 0 00:16:58.781 }, 00:16:58.781 { 00:16:58.781 "name": "BaseBdev3", 00:16:58.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.781 "is_configured": false, 00:16:58.781 "data_offset": 0, 00:16:58.781 "data_size": 0 00:16:58.781 } 00:16:58.781 ] 00:16:58.781 }' 00:16:58.781 13:02:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.781 13:02:17 -- common/autotest_common.sh@10 -- # set +x 00:16:59.348 13:02:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:59.607 [2024-06-11 13:02:18.329624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.607 BaseBdev2 00:16:59.607 13:02:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:59.607 13:02:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:59.607 13:02:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:59.607 13:02:18 -- common/autotest_common.sh@889 -- # local i 00:16:59.607 13:02:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:59.607 13:02:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:59.607 13:02:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.865 13:02:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:00.124 [ 00:17:00.124 { 00:17:00.124 "name": "BaseBdev2", 00:17:00.124 "aliases": [ 00:17:00.124 "5442139f-7b40-419e-af50-d5e26bc07aa6" 00:17:00.124 ], 00:17:00.124 "product_name": "Malloc disk", 00:17:00.124 "block_size": 512, 00:17:00.124 "num_blocks": 65536, 00:17:00.124 "uuid": "5442139f-7b40-419e-af50-d5e26bc07aa6", 00:17:00.124 "assigned_rate_limits": { 00:17:00.124 "rw_ios_per_sec": 0, 00:17:00.124 "rw_mbytes_per_sec": 0, 00:17:00.124 "r_mbytes_per_sec": 0, 00:17:00.124 "w_mbytes_per_sec": 0 00:17:00.124 }, 00:17:00.124 "claimed": true, 00:17:00.124 "claim_type": "exclusive_write", 00:17:00.124 "zoned": false, 00:17:00.124 "supported_io_types": { 00:17:00.124 "read": true, 00:17:00.124 "write": true, 00:17:00.124 "unmap": true, 00:17:00.124 "write_zeroes": true, 00:17:00.124 "flush": true, 00:17:00.124 
"reset": true, 00:17:00.124 "compare": false, 00:17:00.124 "compare_and_write": false, 00:17:00.124 "abort": true, 00:17:00.124 "nvme_admin": false, 00:17:00.124 "nvme_io": false 00:17:00.124 }, 00:17:00.124 "memory_domains": [ 00:17:00.124 { 00:17:00.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.124 "dma_device_type": 2 00:17:00.124 } 00:17:00.124 ], 00:17:00.124 "driver_specific": {} 00:17:00.124 } 00:17:00.124 ] 00:17:00.124 13:02:18 -- common/autotest_common.sh@895 -- # return 0 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.124 13:02:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.382 13:02:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.382 "name": "Existed_Raid", 00:17:00.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.382 "strip_size_kb": 0, 00:17:00.382 "state": "configuring", 00:17:00.382 "raid_level": "raid1", 00:17:00.382 "superblock": false, 00:17:00.382 "num_base_bdevs": 3, 00:17:00.382 "num_base_bdevs_discovered": 2, 00:17:00.382 "num_base_bdevs_operational": 3, 00:17:00.382 "base_bdevs_list": [ 00:17:00.382 { 00:17:00.382 "name": "BaseBdev1", 00:17:00.382 "uuid": "35d30038-6910-479f-be85-ec1d425b02c7", 00:17:00.382 "is_configured": true, 00:17:00.382 "data_offset": 0, 00:17:00.382 "data_size": 65536 00:17:00.382 }, 00:17:00.382 { 00:17:00.382 "name": "BaseBdev2", 00:17:00.382 "uuid": "5442139f-7b40-419e-af50-d5e26bc07aa6", 00:17:00.382 "is_configured": true, 00:17:00.382 "data_offset": 0, 00:17:00.382 "data_size": 65536 00:17:00.382 }, 00:17:00.382 { 00:17:00.382 "name": "BaseBdev3", 00:17:00.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.382 "is_configured": false, 00:17:00.382 "data_offset": 0, 00:17:00.382 "data_size": 0 00:17:00.382 } 00:17:00.382 ] 00:17:00.382 }' 00:17:00.382 13:02:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.382 13:02:18 -- common/autotest_common.sh@10 -- # set +x 00:17:00.950 13:02:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:01.208 [2024-06-11 13:02:19.853269] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.208 [2024-06-11 13:02:19.853545] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:01.209 [2024-06-11 13:02:19.853603] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:01.209 [2024-06-11 
13:02:19.853891] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:01.209 [2024-06-11 13:02:19.854376] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:01.209 [2024-06-11 13:02:19.854506] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:01.209 [2024-06-11 13:02:19.854871] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.209 BaseBdev3 00:17:01.209 13:02:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:01.209 13:02:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:01.209 13:02:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:01.209 13:02:19 -- common/autotest_common.sh@889 -- # local i 00:17:01.209 13:02:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:01.209 13:02:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:01.209 13:02:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:01.467 13:02:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:01.467 [ 00:17:01.467 { 00:17:01.467 "name": "BaseBdev3", 00:17:01.467 "aliases": [ 00:17:01.467 "90afe821-3416-47f2-9d71-47c3d289c3a2" 00:17:01.467 ], 00:17:01.467 "product_name": "Malloc disk", 00:17:01.467 "block_size": 512, 00:17:01.467 "num_blocks": 65536, 00:17:01.467 "uuid": "90afe821-3416-47f2-9d71-47c3d289c3a2", 00:17:01.467 "assigned_rate_limits": { 00:17:01.467 "rw_ios_per_sec": 0, 00:17:01.467 "rw_mbytes_per_sec": 0, 00:17:01.467 "r_mbytes_per_sec": 0, 00:17:01.467 "w_mbytes_per_sec": 0 00:17:01.467 }, 00:17:01.467 "claimed": true, 00:17:01.467 "claim_type": "exclusive_write", 00:17:01.467 "zoned": false, 00:17:01.467 "supported_io_types": { 00:17:01.467 "read": true, 00:17:01.467 "write": true, 00:17:01.467 "unmap": true, 00:17:01.467 "write_zeroes": true, 00:17:01.467 "flush": true, 00:17:01.467 "reset": true, 00:17:01.467 "compare": false, 00:17:01.467 "compare_and_write": false, 00:17:01.467 "abort": true, 00:17:01.467 "nvme_admin": false, 00:17:01.467 "nvme_io": false 00:17:01.467 }, 00:17:01.467 "memory_domains": [ 00:17:01.467 { 00:17:01.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.467 "dma_device_type": 2 00:17:01.467 } 00:17:01.467 ], 00:17:01.467 "driver_specific": {} 00:17:01.467 } 00:17:01.467 ] 00:17:01.467 13:02:20 -- common/autotest_common.sh@895 -- # return 0 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.467 13:02:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.725 13:02:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.725 "name": "Existed_Raid", 00:17:01.725 "uuid": "796dba44-55fc-4e3e-8605-127e23364a87", 00:17:01.725 "strip_size_kb": 0, 00:17:01.725 "state": "online", 00:17:01.725 "raid_level": "raid1", 00:17:01.725 "superblock": false, 00:17:01.725 "num_base_bdevs": 3, 00:17:01.725 "num_base_bdevs_discovered": 3, 00:17:01.725 "num_base_bdevs_operational": 3, 00:17:01.725 "base_bdevs_list": [ 00:17:01.725 { 00:17:01.725 "name": "BaseBdev1", 00:17:01.725 "uuid": "35d30038-6910-479f-be85-ec1d425b02c7", 00:17:01.725 "is_configured": true, 00:17:01.725 "data_offset": 0, 00:17:01.725 "data_size": 65536 00:17:01.725 }, 00:17:01.725 { 00:17:01.725 "name": "BaseBdev2", 00:17:01.725 "uuid": "5442139f-7b40-419e-af50-d5e26bc07aa6", 00:17:01.725 "is_configured": true, 00:17:01.725 "data_offset": 0, 00:17:01.725 "data_size": 65536 00:17:01.725 }, 00:17:01.725 { 00:17:01.725 "name": "BaseBdev3", 00:17:01.725 "uuid": "90afe821-3416-47f2-9d71-47c3d289c3a2", 00:17:01.725 "is_configured": true, 00:17:01.725 "data_offset": 0, 00:17:01.725 "data_size": 65536 00:17:01.725 } 00:17:01.725 ] 00:17:01.725 }' 00:17:01.725 13:02:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.725 13:02:20 -- common/autotest_common.sh@10 -- # set +x 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:02.659 [2024-06-11 13:02:21.401788] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.659 13:02:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.918 13:02:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:02.918 "name": "Existed_Raid", 00:17:02.918 "uuid": "796dba44-55fc-4e3e-8605-127e23364a87", 00:17:02.918 "strip_size_kb": 0, 00:17:02.918 "state": "online", 00:17:02.918 "raid_level": "raid1", 00:17:02.918 "superblock": false, 00:17:02.918 "num_base_bdevs": 3, 00:17:02.918 
"num_base_bdevs_discovered": 2, 00:17:02.918 "num_base_bdevs_operational": 2, 00:17:02.918 "base_bdevs_list": [ 00:17:02.918 { 00:17:02.918 "name": null, 00:17:02.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.919 "is_configured": false, 00:17:02.919 "data_offset": 0, 00:17:02.919 "data_size": 65536 00:17:02.919 }, 00:17:02.919 { 00:17:02.919 "name": "BaseBdev2", 00:17:02.919 "uuid": "5442139f-7b40-419e-af50-d5e26bc07aa6", 00:17:02.919 "is_configured": true, 00:17:02.919 "data_offset": 0, 00:17:02.919 "data_size": 65536 00:17:02.919 }, 00:17:02.919 { 00:17:02.919 "name": "BaseBdev3", 00:17:02.919 "uuid": "90afe821-3416-47f2-9d71-47c3d289c3a2", 00:17:02.919 "is_configured": true, 00:17:02.919 "data_offset": 0, 00:17:02.919 "data_size": 65536 00:17:02.919 } 00:17:02.919 ] 00:17:02.919 }' 00:17:02.919 13:02:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:02.919 13:02:21 -- common/autotest_common.sh@10 -- # set +x 00:17:03.486 13:02:22 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:03.486 13:02:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:03.486 13:02:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.486 13:02:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:03.745 13:02:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:03.745 13:02:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.745 13:02:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:04.004 [2024-06-11 13:02:22.696115] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.004 13:02:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:04.004 13:02:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:04.004 13:02:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.004 13:02:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:04.263 13:02:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:04.263 13:02:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.263 13:02:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:04.522 [2024-06-11 13:02:23.257587] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:04.522 [2024-06-11 13:02:23.257814] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.522 [2024-06-11 13:02:23.258024] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.522 [2024-06-11 13:02:23.326052] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.522 [2024-06-11 13:02:23.326216] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:04.522 13:02:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:04.522 13:02:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:04.522 13:02:23 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.522 13:02:23 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:04.781 13:02:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:04.781 13:02:23 -- 
bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:04.781 13:02:23 -- bdev/bdev_raid.sh@287 -- # killprocess 119986 00:17:04.781 13:02:23 -- common/autotest_common.sh@926 -- # '[' -z 119986 ']' 00:17:04.781 13:02:23 -- common/autotest_common.sh@930 -- # kill -0 119986 00:17:04.781 13:02:23 -- common/autotest_common.sh@931 -- # uname 00:17:04.781 13:02:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:04.781 13:02:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119986 00:17:04.781 killing process with pid 119986 00:17:04.781 13:02:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:04.781 13:02:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:04.781 13:02:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119986' 00:17:04.781 13:02:23 -- common/autotest_common.sh@945 -- # kill 119986 00:17:04.781 13:02:23 -- common/autotest_common.sh@950 -- # wait 119986 00:17:04.781 [2024-06-11 13:02:23.550611] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.781 [2024-06-11 13:02:23.550742] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.717 ************************************ 00:17:05.717 END TEST raid_state_function_test 00:17:05.717 ************************************ 00:17:05.717 13:02:24 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:05.717 00:17:05.717 real 0m11.731s 00:17:05.717 user 0m20.894s 00:17:05.717 sys 0m1.294s 00:17:05.717 13:02:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.717 13:02:24 -- common/autotest_common.sh@10 -- # set +x 00:17:05.717 13:02:24 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:17:05.717 13:02:24 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:05.717 13:02:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:05.717 13:02:24 -- common/autotest_common.sh@10 -- # set +x 00:17:05.976 ************************************ 00:17:05.976 START TEST raid_state_function_test_sb 00:17:05.976 ************************************ 00:17:05.976 13:02:24 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@207 -- # local 
raid_bdev_name=Existed_Raid 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@226 -- # raid_pid=120372 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120372' 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:05.976 Process raid pid: 120372 00:17:05.976 13:02:24 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120372 /var/tmp/spdk-raid.sock 00:17:05.976 13:02:24 -- common/autotest_common.sh@819 -- # '[' -z 120372 ']' 00:17:05.976 13:02:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:05.976 13:02:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:05.976 13:02:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:05.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:05.976 13:02:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:05.976 13:02:24 -- common/autotest_common.sh@10 -- # set +x 00:17:05.976 [2024-06-11 13:02:24.631686] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:05.976 [2024-06-11 13:02:24.632075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.976 [2024-06-11 13:02:24.798638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.235 [2024-06-11 13:02:24.979222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.493 [2024-06-11 13:02:25.156893] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.751 13:02:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:06.751 13:02:25 -- common/autotest_common.sh@852 -- # return 0 00:17:06.751 13:02:25 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:07.009 [2024-06-11 13:02:25.799003] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:07.009 [2024-06-11 13:02:25.799260] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:07.009 [2024-06-11 13:02:25.799367] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.010 [2024-06-11 13:02:25.799424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.010 [2024-06-11 13:02:25.799514] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:07.010 [2024-06-11 13:02:25.799676] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@233 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.010 13:02:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.267 13:02:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.267 "name": "Existed_Raid", 00:17:07.267 "uuid": "abea43f3-c944-4400-8a6f-53abad637b94", 00:17:07.267 "strip_size_kb": 0, 00:17:07.267 "state": "configuring", 00:17:07.268 "raid_level": "raid1", 00:17:07.268 "superblock": true, 00:17:07.268 "num_base_bdevs": 3, 00:17:07.268 "num_base_bdevs_discovered": 0, 00:17:07.268 "num_base_bdevs_operational": 3, 00:17:07.268 "base_bdevs_list": [ 00:17:07.268 { 00:17:07.268 "name": "BaseBdev1", 00:17:07.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.268 "is_configured": false, 00:17:07.268 "data_offset": 0, 00:17:07.268 "data_size": 0 00:17:07.268 }, 00:17:07.268 { 00:17:07.268 "name": "BaseBdev2", 00:17:07.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.268 "is_configured": false, 00:17:07.268 "data_offset": 0, 00:17:07.268 "data_size": 0 00:17:07.268 }, 00:17:07.268 { 00:17:07.268 "name": "BaseBdev3", 00:17:07.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.268 "is_configured": false, 00:17:07.268 "data_offset": 0, 00:17:07.268 "data_size": 0 00:17:07.268 } 00:17:07.268 ] 00:17:07.268 }' 00:17:07.268 13:02:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.268 13:02:26 -- common/autotest_common.sh@10 -- # set +x 00:17:07.915 13:02:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:08.174 [2024-06-11 13:02:26.907092] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:08.174 [2024-06-11 13:02:26.907258] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:08.174 13:02:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:08.433 [2024-06-11 13:02:27.151186] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:08.433 [2024-06-11 13:02:27.151378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:08.433 [2024-06-11 13:02:27.151492] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:08.433 [2024-06-11 13:02:27.151545] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:08.433 [2024-06-11 13:02:27.151743] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:17:08.433 [2024-06-11 13:02:27.151811] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:08.433 13:02:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:08.691 [2024-06-11 13:02:27.377352] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.691 BaseBdev1 00:17:08.691 13:02:27 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:08.691 13:02:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:08.691 13:02:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:08.691 13:02:27 -- common/autotest_common.sh@889 -- # local i 00:17:08.691 13:02:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:08.691 13:02:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:08.691 13:02:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:08.949 13:02:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:08.949 [ 00:17:08.949 { 00:17:08.949 "name": "BaseBdev1", 00:17:08.949 "aliases": [ 00:17:08.949 "bc41ec93-0226-4dc6-b87c-2695602e6c31" 00:17:08.949 ], 00:17:08.949 "product_name": "Malloc disk", 00:17:08.949 "block_size": 512, 00:17:08.949 "num_blocks": 65536, 00:17:08.949 "uuid": "bc41ec93-0226-4dc6-b87c-2695602e6c31", 00:17:08.949 "assigned_rate_limits": { 00:17:08.949 "rw_ios_per_sec": 0, 00:17:08.949 "rw_mbytes_per_sec": 0, 00:17:08.949 "r_mbytes_per_sec": 0, 00:17:08.949 "w_mbytes_per_sec": 0 00:17:08.949 }, 00:17:08.949 "claimed": true, 00:17:08.949 "claim_type": "exclusive_write", 00:17:08.949 "zoned": false, 00:17:08.949 "supported_io_types": { 00:17:08.949 "read": true, 00:17:08.949 "write": true, 00:17:08.949 "unmap": true, 00:17:08.949 "write_zeroes": true, 00:17:08.949 "flush": true, 00:17:08.949 "reset": true, 00:17:08.949 "compare": false, 00:17:08.949 "compare_and_write": false, 00:17:08.949 "abort": true, 00:17:08.949 "nvme_admin": false, 00:17:08.949 "nvme_io": false 00:17:08.949 }, 00:17:08.949 "memory_domains": [ 00:17:08.949 { 00:17:08.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.949 "dma_device_type": 2 00:17:08.949 } 00:17:08.949 ], 00:17:08.949 "driver_specific": {} 00:17:08.949 } 00:17:08.949 ] 00:17:08.949 13:02:27 -- common/autotest_common.sh@895 -- # return 0 00:17:08.949 13:02:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:08.949 13:02:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:08.949 13:02:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:08.949 13:02:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:08.949 13:02:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:08.949 13:02:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:08.949 13:02:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.949 13:02:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.949 13:02:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.949 13:02:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.207 13:02:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.207 13:02:27 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.207 13:02:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.207 "name": "Existed_Raid", 00:17:09.207 "uuid": "7d689189-2525-462c-bc52-716086434642", 00:17:09.207 "strip_size_kb": 0, 00:17:09.207 "state": "configuring", 00:17:09.207 "raid_level": "raid1", 00:17:09.207 "superblock": true, 00:17:09.207 "num_base_bdevs": 3, 00:17:09.207 "num_base_bdevs_discovered": 1, 00:17:09.207 "num_base_bdevs_operational": 3, 00:17:09.207 "base_bdevs_list": [ 00:17:09.207 { 00:17:09.207 "name": "BaseBdev1", 00:17:09.207 "uuid": "bc41ec93-0226-4dc6-b87c-2695602e6c31", 00:17:09.207 "is_configured": true, 00:17:09.207 "data_offset": 2048, 00:17:09.207 "data_size": 63488 00:17:09.207 }, 00:17:09.207 { 00:17:09.207 "name": "BaseBdev2", 00:17:09.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.207 "is_configured": false, 00:17:09.207 "data_offset": 0, 00:17:09.207 "data_size": 0 00:17:09.207 }, 00:17:09.207 { 00:17:09.207 "name": "BaseBdev3", 00:17:09.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.207 "is_configured": false, 00:17:09.207 "data_offset": 0, 00:17:09.207 "data_size": 0 00:17:09.207 } 00:17:09.207 ] 00:17:09.207 }' 00:17:09.207 13:02:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.207 13:02:27 -- common/autotest_common.sh@10 -- # set +x 00:17:10.143 13:02:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:10.143 [2024-06-11 13:02:28.817701] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:10.143 [2024-06-11 13:02:28.817952] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:10.143 13:02:28 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:10.143 13:02:28 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:10.401 13:02:29 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:10.660 BaseBdev1 00:17:10.660 13:02:29 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:10.660 13:02:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:10.660 13:02:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:10.660 13:02:29 -- common/autotest_common.sh@889 -- # local i 00:17:10.660 13:02:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:10.660 13:02:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:10.660 13:02:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.918 13:02:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:10.918 [ 00:17:10.918 { 00:17:10.918 "name": "BaseBdev1", 00:17:10.918 "aliases": [ 00:17:10.919 "8884feef-43eb-4f72-99e4-d39b62f1c0b5" 00:17:10.919 ], 00:17:10.919 "product_name": "Malloc disk", 00:17:10.919 "block_size": 512, 00:17:10.919 "num_blocks": 65536, 00:17:10.919 "uuid": "8884feef-43eb-4f72-99e4-d39b62f1c0b5", 00:17:10.919 "assigned_rate_limits": { 00:17:10.919 "rw_ios_per_sec": 0, 00:17:10.919 "rw_mbytes_per_sec": 0, 00:17:10.919 "r_mbytes_per_sec": 0, 00:17:10.919 "w_mbytes_per_sec": 0 00:17:10.919 }, 00:17:10.919 "claimed": false, 
00:17:10.919 "zoned": false, 00:17:10.919 "supported_io_types": { 00:17:10.919 "read": true, 00:17:10.919 "write": true, 00:17:10.919 "unmap": true, 00:17:10.919 "write_zeroes": true, 00:17:10.919 "flush": true, 00:17:10.919 "reset": true, 00:17:10.919 "compare": false, 00:17:10.919 "compare_and_write": false, 00:17:10.919 "abort": true, 00:17:10.919 "nvme_admin": false, 00:17:10.919 "nvme_io": false 00:17:10.919 }, 00:17:10.919 "memory_domains": [ 00:17:10.919 { 00:17:10.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.919 "dma_device_type": 2 00:17:10.919 } 00:17:10.919 ], 00:17:10.919 "driver_specific": {} 00:17:10.919 } 00:17:10.919 ] 00:17:10.919 13:02:29 -- common/autotest_common.sh@895 -- # return 0 00:17:10.919 13:02:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:11.177 [2024-06-11 13:02:29.987795] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.177 [2024-06-11 13:02:29.989672] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:11.177 [2024-06-11 13:02:29.989889] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:11.177 [2024-06-11 13:02:29.990014] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:11.177 [2024-06-11 13:02:29.990073] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.177 13:02:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.177 13:02:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.177 13:02:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.436 13:02:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.436 "name": "Existed_Raid", 00:17:11.436 "uuid": "4a94e5a5-303e-4447-8515-67d2da7c8a9b", 00:17:11.436 "strip_size_kb": 0, 00:17:11.436 "state": "configuring", 00:17:11.436 "raid_level": "raid1", 00:17:11.436 "superblock": true, 00:17:11.436 "num_base_bdevs": 3, 00:17:11.436 "num_base_bdevs_discovered": 1, 00:17:11.436 "num_base_bdevs_operational": 3, 00:17:11.436 "base_bdevs_list": [ 00:17:11.436 { 00:17:11.436 "name": "BaseBdev1", 00:17:11.436 "uuid": "8884feef-43eb-4f72-99e4-d39b62f1c0b5", 00:17:11.436 "is_configured": true, 00:17:11.436 "data_offset": 2048, 00:17:11.436 "data_size": 63488 00:17:11.436 }, 00:17:11.436 { 00:17:11.436 "name": "BaseBdev2", 00:17:11.436 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:11.436 "is_configured": false, 00:17:11.436 "data_offset": 0, 00:17:11.436 "data_size": 0 00:17:11.436 }, 00:17:11.436 { 00:17:11.436 "name": "BaseBdev3", 00:17:11.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.436 "is_configured": false, 00:17:11.436 "data_offset": 0, 00:17:11.436 "data_size": 0 00:17:11.436 } 00:17:11.436 ] 00:17:11.436 }' 00:17:11.436 13:02:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.436 13:02:30 -- common/autotest_common.sh@10 -- # set +x 00:17:12.371 13:02:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:12.371 [2024-06-11 13:02:31.194353] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:12.371 BaseBdev2 00:17:12.371 13:02:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:12.371 13:02:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:12.371 13:02:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:12.371 13:02:31 -- common/autotest_common.sh@889 -- # local i 00:17:12.371 13:02:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:12.371 13:02:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:12.371 13:02:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:12.937 13:02:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:12.937 [ 00:17:12.937 { 00:17:12.937 "name": "BaseBdev2", 00:17:12.937 "aliases": [ 00:17:12.937 "5e56d7fc-dcac-48b9-bd62-7b8ca4efb547" 00:17:12.937 ], 00:17:12.937 "product_name": "Malloc disk", 00:17:12.937 "block_size": 512, 00:17:12.937 "num_blocks": 65536, 00:17:12.937 "uuid": "5e56d7fc-dcac-48b9-bd62-7b8ca4efb547", 00:17:12.937 "assigned_rate_limits": { 00:17:12.937 "rw_ios_per_sec": 0, 00:17:12.937 "rw_mbytes_per_sec": 0, 00:17:12.937 "r_mbytes_per_sec": 0, 00:17:12.937 "w_mbytes_per_sec": 0 00:17:12.937 }, 00:17:12.937 "claimed": true, 00:17:12.937 "claim_type": "exclusive_write", 00:17:12.937 "zoned": false, 00:17:12.937 "supported_io_types": { 00:17:12.937 "read": true, 00:17:12.937 "write": true, 00:17:12.937 "unmap": true, 00:17:12.937 "write_zeroes": true, 00:17:12.937 "flush": true, 00:17:12.937 "reset": true, 00:17:12.937 "compare": false, 00:17:12.937 "compare_and_write": false, 00:17:12.937 "abort": true, 00:17:12.937 "nvme_admin": false, 00:17:12.937 "nvme_io": false 00:17:12.937 }, 00:17:12.937 "memory_domains": [ 00:17:12.937 { 00:17:12.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.937 "dma_device_type": 2 00:17:12.937 } 00:17:12.937 ], 00:17:12.937 "driver_specific": {} 00:17:12.937 } 00:17:12.937 ] 00:17:12.937 13:02:31 -- common/autotest_common.sh@895 -- # return 0 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:12.938 13:02:31 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.938 13:02:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.197 13:02:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:13.197 "name": "Existed_Raid", 00:17:13.197 "uuid": "4a94e5a5-303e-4447-8515-67d2da7c8a9b", 00:17:13.197 "strip_size_kb": 0, 00:17:13.197 "state": "configuring", 00:17:13.197 "raid_level": "raid1", 00:17:13.197 "superblock": true, 00:17:13.197 "num_base_bdevs": 3, 00:17:13.197 "num_base_bdevs_discovered": 2, 00:17:13.197 "num_base_bdevs_operational": 3, 00:17:13.197 "base_bdevs_list": [ 00:17:13.197 { 00:17:13.197 "name": "BaseBdev1", 00:17:13.197 "uuid": "8884feef-43eb-4f72-99e4-d39b62f1c0b5", 00:17:13.197 "is_configured": true, 00:17:13.197 "data_offset": 2048, 00:17:13.197 "data_size": 63488 00:17:13.197 }, 00:17:13.197 { 00:17:13.197 "name": "BaseBdev2", 00:17:13.197 "uuid": "5e56d7fc-dcac-48b9-bd62-7b8ca4efb547", 00:17:13.197 "is_configured": true, 00:17:13.197 "data_offset": 2048, 00:17:13.197 "data_size": 63488 00:17:13.197 }, 00:17:13.197 { 00:17:13.197 "name": "BaseBdev3", 00:17:13.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.197 "is_configured": false, 00:17:13.197 "data_offset": 0, 00:17:13.197 "data_size": 0 00:17:13.197 } 00:17:13.197 ] 00:17:13.197 }' 00:17:13.197 13:02:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:13.197 13:02:31 -- common/autotest_common.sh@10 -- # set +x 00:17:13.771 13:02:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:14.028 [2024-06-11 13:02:32.811425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:14.028 [2024-06-11 13:02:32.811858] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:14.028 [2024-06-11 13:02:32.811981] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:14.028 BaseBdev3 00:17:14.028 [2024-06-11 13:02:32.812143] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:14.028 [2024-06-11 13:02:32.812664] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:14.028 [2024-06-11 13:02:32.812825] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:14.028 [2024-06-11 13:02:32.813073] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.028 13:02:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:14.028 13:02:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:14.028 13:02:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:14.028 13:02:32 -- common/autotest_common.sh@889 -- # local i 00:17:14.028 13:02:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:14.028 13:02:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:14.028 13:02:32 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:14.286 13:02:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:14.544 [ 00:17:14.544 { 00:17:14.544 "name": "BaseBdev3", 00:17:14.544 "aliases": [ 00:17:14.544 "5007c56d-ffe6-4aab-981c-e2519b8e54e2" 00:17:14.544 ], 00:17:14.544 "product_name": "Malloc disk", 00:17:14.544 "block_size": 512, 00:17:14.544 "num_blocks": 65536, 00:17:14.544 "uuid": "5007c56d-ffe6-4aab-981c-e2519b8e54e2", 00:17:14.544 "assigned_rate_limits": { 00:17:14.544 "rw_ios_per_sec": 0, 00:17:14.544 "rw_mbytes_per_sec": 0, 00:17:14.544 "r_mbytes_per_sec": 0, 00:17:14.544 "w_mbytes_per_sec": 0 00:17:14.544 }, 00:17:14.544 "claimed": true, 00:17:14.544 "claim_type": "exclusive_write", 00:17:14.544 "zoned": false, 00:17:14.544 "supported_io_types": { 00:17:14.544 "read": true, 00:17:14.544 "write": true, 00:17:14.544 "unmap": true, 00:17:14.544 "write_zeroes": true, 00:17:14.544 "flush": true, 00:17:14.544 "reset": true, 00:17:14.544 "compare": false, 00:17:14.544 "compare_and_write": false, 00:17:14.544 "abort": true, 00:17:14.544 "nvme_admin": false, 00:17:14.544 "nvme_io": false 00:17:14.544 }, 00:17:14.544 "memory_domains": [ 00:17:14.544 { 00:17:14.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.544 "dma_device_type": 2 00:17:14.544 } 00:17:14.544 ], 00:17:14.544 "driver_specific": {} 00:17:14.544 } 00:17:14.544 ] 00:17:14.544 13:02:33 -- common/autotest_common.sh@895 -- # return 0 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.544 13:02:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.802 13:02:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.802 "name": "Existed_Raid", 00:17:14.802 "uuid": "4a94e5a5-303e-4447-8515-67d2da7c8a9b", 00:17:14.802 "strip_size_kb": 0, 00:17:14.802 "state": "online", 00:17:14.802 "raid_level": "raid1", 00:17:14.802 "superblock": true, 00:17:14.802 "num_base_bdevs": 3, 00:17:14.802 "num_base_bdevs_discovered": 3, 00:17:14.802 "num_base_bdevs_operational": 3, 00:17:14.802 "base_bdevs_list": [ 00:17:14.802 { 00:17:14.802 "name": "BaseBdev1", 00:17:14.802 "uuid": "8884feef-43eb-4f72-99e4-d39b62f1c0b5", 00:17:14.802 "is_configured": true, 00:17:14.802 "data_offset": 2048, 00:17:14.802 "data_size": 63488 00:17:14.802 }, 00:17:14.802 { 00:17:14.802 "name": "BaseBdev2", 00:17:14.802 "uuid": 
"5e56d7fc-dcac-48b9-bd62-7b8ca4efb547", 00:17:14.802 "is_configured": true, 00:17:14.802 "data_offset": 2048, 00:17:14.802 "data_size": 63488 00:17:14.802 }, 00:17:14.802 { 00:17:14.802 "name": "BaseBdev3", 00:17:14.802 "uuid": "5007c56d-ffe6-4aab-981c-e2519b8e54e2", 00:17:14.802 "is_configured": true, 00:17:14.802 "data_offset": 2048, 00:17:14.802 "data_size": 63488 00:17:14.802 } 00:17:14.802 ] 00:17:14.802 }' 00:17:14.802 13:02:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.802 13:02:33 -- common/autotest_common.sh@10 -- # set +x 00:17:15.370 13:02:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:15.628 [2024-06-11 13:02:34.391867] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:15.887 "name": "Existed_Raid", 00:17:15.887 "uuid": "4a94e5a5-303e-4447-8515-67d2da7c8a9b", 00:17:15.887 "strip_size_kb": 0, 00:17:15.887 "state": "online", 00:17:15.887 "raid_level": "raid1", 00:17:15.887 "superblock": true, 00:17:15.887 "num_base_bdevs": 3, 00:17:15.887 "num_base_bdevs_discovered": 2, 00:17:15.887 "num_base_bdevs_operational": 2, 00:17:15.887 "base_bdevs_list": [ 00:17:15.887 { 00:17:15.887 "name": null, 00:17:15.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.887 "is_configured": false, 00:17:15.887 "data_offset": 2048, 00:17:15.887 "data_size": 63488 00:17:15.887 }, 00:17:15.887 { 00:17:15.887 "name": "BaseBdev2", 00:17:15.887 "uuid": "5e56d7fc-dcac-48b9-bd62-7b8ca4efb547", 00:17:15.887 "is_configured": true, 00:17:15.887 "data_offset": 2048, 00:17:15.887 "data_size": 63488 00:17:15.887 }, 00:17:15.887 { 00:17:15.887 "name": "BaseBdev3", 00:17:15.887 "uuid": "5007c56d-ffe6-4aab-981c-e2519b8e54e2", 00:17:15.887 "is_configured": true, 00:17:15.887 "data_offset": 2048, 00:17:15.887 "data_size": 63488 00:17:15.887 } 00:17:15.887 ] 00:17:15.887 }' 00:17:15.887 13:02:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:15.887 13:02:34 -- common/autotest_common.sh@10 -- # set +x 00:17:16.822 13:02:35 -- bdev/bdev_raid.sh@273 -- # (( i = 
1 )) 00:17:16.822 13:02:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:16.822 13:02:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.822 13:02:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:16.822 13:02:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:16.822 13:02:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.822 13:02:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:17.081 [2024-06-11 13:02:35.738472] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:17.081 13:02:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:17.081 13:02:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:17.081 13:02:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.081 13:02:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:17.339 13:02:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:17.339 13:02:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:17.339 13:02:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:17.597 [2024-06-11 13:02:36.292189] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:17.597 [2024-06-11 13:02:36.292368] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.597 [2024-06-11 13:02:36.292558] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.597 [2024-06-11 13:02:36.355817] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.597 [2024-06-11 13:02:36.356033] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:17.597 13:02:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:17.597 13:02:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:17.597 13:02:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.597 13:02:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:17.857 13:02:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:17.857 13:02:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:17.857 13:02:36 -- bdev/bdev_raid.sh@287 -- # killprocess 120372 00:17:17.857 13:02:36 -- common/autotest_common.sh@926 -- # '[' -z 120372 ']' 00:17:17.857 13:02:36 -- common/autotest_common.sh@930 -- # kill -0 120372 00:17:17.857 13:02:36 -- common/autotest_common.sh@931 -- # uname 00:17:17.857 13:02:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:17.857 13:02:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120372 00:17:17.857 killing process with pid 120372 00:17:17.857 13:02:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:17.857 13:02:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:17.857 13:02:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120372' 00:17:17.857 13:02:36 -- common/autotest_common.sh@945 -- # kill 120372 00:17:17.857 13:02:36 -- common/autotest_common.sh@950 -- # wait 120372 00:17:17.857 [2024-06-11 13:02:36.622857] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.857 [2024-06-11 13:02:36.623017] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.792 ************************************ 00:17:18.792 END TEST raid_state_function_test_sb 00:17:18.792 ************************************ 00:17:18.792 13:02:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:18.792 00:17:18.792 real 0m13.029s 00:17:18.792 user 0m23.238s 00:17:18.792 sys 0m1.484s 00:17:18.792 13:02:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.792 13:02:37 -- common/autotest_common.sh@10 -- # set +x 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:19.049 13:02:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:19.049 13:02:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:19.049 13:02:37 -- common/autotest_common.sh@10 -- # set +x 00:17:19.049 ************************************ 00:17:19.049 START TEST raid_superblock_test 00:17:19.049 ************************************ 00:17:19.049 13:02:37 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@357 -- # raid_pid=120779 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:19.049 13:02:37 -- bdev/bdev_raid.sh@358 -- # waitforlisten 120779 /var/tmp/spdk-raid.sock 00:17:19.049 13:02:37 -- common/autotest_common.sh@819 -- # '[' -z 120779 ']' 00:17:19.049 13:02:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:19.049 13:02:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:19.049 13:02:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:19.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:19.049 13:02:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:19.049 13:02:37 -- common/autotest_common.sh@10 -- # set +x 00:17:19.049 [2024-06-11 13:02:37.713993] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:19.049 [2024-06-11 13:02:37.714367] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120779 ] 00:17:19.049 [2024-06-11 13:02:37.875792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.307 [2024-06-11 13:02:38.041748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.566 [2024-06-11 13:02:38.210741] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.823 13:02:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:19.823 13:02:38 -- common/autotest_common.sh@852 -- # return 0 00:17:19.823 13:02:38 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:19.823 13:02:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:19.824 13:02:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:19.824 13:02:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:19.824 13:02:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:19.824 13:02:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:19.824 13:02:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:19.824 13:02:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:19.824 13:02:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:20.082 malloc1 00:17:20.082 13:02:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.340 [2024-06-11 13:02:39.083704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.340 [2024-06-11 13:02:39.083941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.340 [2024-06-11 13:02:39.084081] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:20.340 [2024-06-11 13:02:39.084237] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.340 [2024-06-11 13:02:39.086428] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.340 [2024-06-11 13:02:39.086604] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.340 pt1 00:17:20.340 13:02:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:20.340 13:02:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:20.340 13:02:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:20.340 13:02:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:20.340 13:02:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:20.340 13:02:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:20.340 13:02:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:20.340 13:02:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:20.340 13:02:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:20.598 malloc2 00:17:20.598 13:02:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:20.856 [2024-06-11 13:02:39.518842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.856 [2024-06-11 13:02:39.519072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.856 [2024-06-11 13:02:39.519242] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:20.856 [2024-06-11 13:02:39.519385] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.856 [2024-06-11 13:02:39.521528] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.856 [2024-06-11 13:02:39.521715] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.856 pt2 00:17:20.856 13:02:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:20.856 13:02:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:20.856 13:02:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:20.856 13:02:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:20.856 13:02:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:20.856 13:02:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:20.856 13:02:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:20.856 13:02:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:20.856 13:02:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:21.114 malloc3 00:17:21.114 13:02:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:21.114 [2024-06-11 13:02:39.925335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:21.114 [2024-06-11 13:02:39.925630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.114 [2024-06-11 13:02:39.925824] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:21.114 [2024-06-11 13:02:39.925978] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.114 [2024-06-11 13:02:39.927843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.114 [2024-06-11 13:02:39.928021] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:21.114 pt3 00:17:21.114 13:02:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:21.114 13:02:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:21.114 13:02:39 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:21.372 [2024-06-11 13:02:40.113384] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:21.372 [2024-06-11 13:02:40.115083] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.372 [2024-06-11 13:02:40.115262] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:21.372 [2024-06-11 13:02:40.115485] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:21.372 [2024-06-11 13:02:40.115591] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:21.372 [2024-06-11 13:02:40.115835] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:21.372 [2024-06-11 13:02:40.116279] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:21.372 [2024-06-11 13:02:40.116397] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:21.372 [2024-06-11 13:02:40.116630] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.372 13:02:40 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:21.372 13:02:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:21.372 13:02:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:21.372 13:02:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:21.372 13:02:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:21.372 13:02:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:21.372 13:02:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.372 13:02:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.372 13:02:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.373 13:02:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.373 13:02:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.373 13:02:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.631 13:02:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.631 "name": "raid_bdev1", 00:17:21.631 "uuid": "401b84a1-7fa5-430d-8885-574273e0f4cf", 00:17:21.631 "strip_size_kb": 0, 00:17:21.631 "state": "online", 00:17:21.631 "raid_level": "raid1", 00:17:21.631 "superblock": true, 00:17:21.631 "num_base_bdevs": 3, 00:17:21.631 "num_base_bdevs_discovered": 3, 00:17:21.631 "num_base_bdevs_operational": 3, 00:17:21.631 "base_bdevs_list": [ 00:17:21.631 { 00:17:21.631 "name": "pt1", 00:17:21.631 "uuid": "495eb547-1477-5475-96a1-1a488cd11111", 00:17:21.631 "is_configured": true, 00:17:21.631 "data_offset": 2048, 00:17:21.631 "data_size": 63488 00:17:21.631 }, 00:17:21.631 { 00:17:21.631 "name": "pt2", 00:17:21.631 "uuid": "3683a3ff-26af-5a21-ade9-8797190fb0e8", 00:17:21.631 "is_configured": true, 00:17:21.631 "data_offset": 2048, 00:17:21.631 "data_size": 63488 00:17:21.631 }, 00:17:21.631 { 00:17:21.631 "name": "pt3", 00:17:21.631 "uuid": "ba8a6e15-1312-5c8b-a452-b4ff46e6f8c6", 00:17:21.631 "is_configured": true, 00:17:21.631 "data_offset": 2048, 00:17:21.631 "data_size": 63488 00:17:21.631 } 00:17:21.631 ] 00:17:21.631 }' 00:17:21.631 13:02:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.631 13:02:40 -- common/autotest_common.sh@10 -- # set +x 00:17:22.198 13:02:40 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:22.198 13:02:40 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:22.457 [2024-06-11 13:02:41.209878] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.457 13:02:41 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=401b84a1-7fa5-430d-8885-574273e0f4cf 00:17:22.457 13:02:41 -- bdev/bdev_raid.sh@380 -- # '[' -z 401b84a1-7fa5-430d-8885-574273e0f4cf ']' 00:17:22.457 13:02:41 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:22.715 [2024-06-11 13:02:41.465741] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.715 [2024-06-11 13:02:41.465886] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.715 [2024-06-11 13:02:41.466065] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.715 [2024-06-11 13:02:41.466243] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.715 [2024-06-11 13:02:41.466365] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:22.715 13:02:41 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.715 13:02:41 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:22.973 13:02:41 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:22.973 13:02:41 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:22.973 13:02:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:22.973 13:02:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:23.232 13:02:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:23.232 13:02:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:23.489 13:02:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:23.489 13:02:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:23.747 13:02:42 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:23.747 13:02:42 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:23.747 13:02:42 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:23.747 13:02:42 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:23.747 13:02:42 -- common/autotest_common.sh@640 -- # local es=0 00:17:23.747 13:02:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:23.747 13:02:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.747 13:02:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:23.747 13:02:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.747 13:02:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:23.747 13:02:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.747 13:02:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:23.747 13:02:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.747 13:02:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:23.747 13:02:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:24.005 [2024-06-11 13:02:42.754088] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:24.005 [2024-06-11 13:02:42.755752] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:24.005 [2024-06-11 13:02:42.755939] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:24.005 [2024-06-11 13:02:42.756030] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:24.005 [2024-06-11 13:02:42.756272] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:24.005 [2024-06-11 13:02:42.756413] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:24.005 [2024-06-11 13:02:42.756551] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.005 [2024-06-11 13:02:42.756668] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:17:24.005 request: 00:17:24.005 { 00:17:24.005 "name": "raid_bdev1", 00:17:24.005 "raid_level": "raid1", 00:17:24.005 "base_bdevs": [ 00:17:24.005 "malloc1", 00:17:24.005 "malloc2", 00:17:24.005 "malloc3" 00:17:24.005 ], 00:17:24.005 "superblock": false, 00:17:24.005 "method": "bdev_raid_create", 00:17:24.005 "req_id": 1 00:17:24.005 } 00:17:24.005 Got JSON-RPC error response 00:17:24.005 response: 00:17:24.005 { 00:17:24.005 "code": -17, 00:17:24.005 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:24.005 } 00:17:24.005 13:02:42 -- common/autotest_common.sh@643 -- # es=1 00:17:24.005 13:02:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:24.005 13:02:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:24.005 13:02:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:24.005 13:02:42 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.005 13:02:42 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:24.263 13:02:42 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:24.263 13:02:42 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:24.263 13:02:42 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:24.521 [2024-06-11 13:02:43.146124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:24.521 [2024-06-11 13:02:43.146542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.521 [2024-06-11 13:02:43.146686] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:24.521 [2024-06-11 13:02:43.146791] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.521 [2024-06-11 13:02:43.148954] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.521 [2024-06-11 13:02:43.149108] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:24.521 [2024-06-11 13:02:43.149358] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:24.521 [2024-06-11 13:02:43.149611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:24.521 pt1 00:17:24.521 13:02:43 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:24.521 
13:02:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:24.521 13:02:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:24.521 13:02:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:24.521 13:02:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:24.521 13:02:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:24.521 13:02:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:24.521 13:02:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:24.521 13:02:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:24.521 13:02:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:24.521 13:02:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.521 13:02:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.778 13:02:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.778 "name": "raid_bdev1", 00:17:24.778 "uuid": "401b84a1-7fa5-430d-8885-574273e0f4cf", 00:17:24.778 "strip_size_kb": 0, 00:17:24.778 "state": "configuring", 00:17:24.778 "raid_level": "raid1", 00:17:24.778 "superblock": true, 00:17:24.778 "num_base_bdevs": 3, 00:17:24.778 "num_base_bdevs_discovered": 1, 00:17:24.778 "num_base_bdevs_operational": 3, 00:17:24.778 "base_bdevs_list": [ 00:17:24.778 { 00:17:24.778 "name": "pt1", 00:17:24.778 "uuid": "495eb547-1477-5475-96a1-1a488cd11111", 00:17:24.778 "is_configured": true, 00:17:24.778 "data_offset": 2048, 00:17:24.778 "data_size": 63488 00:17:24.778 }, 00:17:24.778 { 00:17:24.778 "name": null, 00:17:24.778 "uuid": "3683a3ff-26af-5a21-ade9-8797190fb0e8", 00:17:24.778 "is_configured": false, 00:17:24.778 "data_offset": 2048, 00:17:24.778 "data_size": 63488 00:17:24.778 }, 00:17:24.778 { 00:17:24.778 "name": null, 00:17:24.778 "uuid": "ba8a6e15-1312-5c8b-a452-b4ff46e6f8c6", 00:17:24.778 "is_configured": false, 00:17:24.778 "data_offset": 2048, 00:17:24.778 "data_size": 63488 00:17:24.778 } 00:17:24.778 ] 00:17:24.778 }' 00:17:24.778 13:02:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.778 13:02:43 -- common/autotest_common.sh@10 -- # set +x 00:17:25.343 13:02:44 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:25.343 13:02:44 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:25.602 [2024-06-11 13:02:44.218355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:25.602 [2024-06-11 13:02:44.218578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.602 [2024-06-11 13:02:44.218743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:25.602 [2024-06-11 13:02:44.218857] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.602 [2024-06-11 13:02:44.219426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.602 [2024-06-11 13:02:44.219573] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:25.602 [2024-06-11 13:02:44.219780] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:25.602 [2024-06-11 13:02:44.219907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:25.602 pt2 00:17:25.602 13:02:44 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:25.860 [2024-06-11 13:02:44.458412] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.861 "name": "raid_bdev1", 00:17:25.861 "uuid": "401b84a1-7fa5-430d-8885-574273e0f4cf", 00:17:25.861 "strip_size_kb": 0, 00:17:25.861 "state": "configuring", 00:17:25.861 "raid_level": "raid1", 00:17:25.861 "superblock": true, 00:17:25.861 "num_base_bdevs": 3, 00:17:25.861 "num_base_bdevs_discovered": 1, 00:17:25.861 "num_base_bdevs_operational": 3, 00:17:25.861 "base_bdevs_list": [ 00:17:25.861 { 00:17:25.861 "name": "pt1", 00:17:25.861 "uuid": "495eb547-1477-5475-96a1-1a488cd11111", 00:17:25.861 "is_configured": true, 00:17:25.861 "data_offset": 2048, 00:17:25.861 "data_size": 63488 00:17:25.861 }, 00:17:25.861 { 00:17:25.861 "name": null, 00:17:25.861 "uuid": "3683a3ff-26af-5a21-ade9-8797190fb0e8", 00:17:25.861 "is_configured": false, 00:17:25.861 "data_offset": 2048, 00:17:25.861 "data_size": 63488 00:17:25.861 }, 00:17:25.861 { 00:17:25.861 "name": null, 00:17:25.861 "uuid": "ba8a6e15-1312-5c8b-a452-b4ff46e6f8c6", 00:17:25.861 "is_configured": false, 00:17:25.861 "data_offset": 2048, 00:17:25.861 "data_size": 63488 00:17:25.861 } 00:17:25.861 ] 00:17:25.861 }' 00:17:25.861 13:02:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.861 13:02:44 -- common/autotest_common.sh@10 -- # set +x 00:17:26.797 13:02:45 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:26.797 13:02:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:26.797 13:02:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:26.797 [2024-06-11 13:02:45.598641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:26.797 [2024-06-11 13:02:45.598863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.797 [2024-06-11 13:02:45.599016] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:26.797 [2024-06-11 13:02:45.599145] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.797 [2024-06-11 13:02:45.599672] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.797 [2024-06-11 13:02:45.599827] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:26.797 [2024-06-11 13:02:45.600042] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:26.797 [2024-06-11 13:02:45.600186] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:26.797 pt2 00:17:26.797 13:02:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:26.797 13:02:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:26.797 13:02:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:27.055 [2024-06-11 13:02:45.782653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:27.056 [2024-06-11 13:02:45.782871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.056 [2024-06-11 13:02:45.782935] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:27.056 [2024-06-11 13:02:45.783188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.056 [2024-06-11 13:02:45.783618] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.056 [2024-06-11 13:02:45.783796] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:27.056 [2024-06-11 13:02:45.784032] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:27.056 [2024-06-11 13:02:45.784152] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:27.056 [2024-06-11 13:02:45.784344] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:17:27.056 [2024-06-11 13:02:45.784439] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:27.056 [2024-06-11 13:02:45.784596] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:27.056 [2024-06-11 13:02:45.785021] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:27.056 [2024-06-11 13:02:45.785137] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:27.056 [2024-06-11 13:02:45.785361] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.056 pt3 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:27.056 13:02:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.056 13:02:45 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.314 13:02:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.315 "name": "raid_bdev1", 00:17:27.315 "uuid": "401b84a1-7fa5-430d-8885-574273e0f4cf", 00:17:27.315 "strip_size_kb": 0, 00:17:27.315 "state": "online", 00:17:27.315 "raid_level": "raid1", 00:17:27.315 "superblock": true, 00:17:27.315 "num_base_bdevs": 3, 00:17:27.315 "num_base_bdevs_discovered": 3, 00:17:27.315 "num_base_bdevs_operational": 3, 00:17:27.315 "base_bdevs_list": [ 00:17:27.315 { 00:17:27.315 "name": "pt1", 00:17:27.315 "uuid": "495eb547-1477-5475-96a1-1a488cd11111", 00:17:27.315 "is_configured": true, 00:17:27.315 "data_offset": 2048, 00:17:27.315 "data_size": 63488 00:17:27.315 }, 00:17:27.315 { 00:17:27.315 "name": "pt2", 00:17:27.315 "uuid": "3683a3ff-26af-5a21-ade9-8797190fb0e8", 00:17:27.315 "is_configured": true, 00:17:27.315 "data_offset": 2048, 00:17:27.315 "data_size": 63488 00:17:27.315 }, 00:17:27.315 { 00:17:27.315 "name": "pt3", 00:17:27.315 "uuid": "ba8a6e15-1312-5c8b-a452-b4ff46e6f8c6", 00:17:27.315 "is_configured": true, 00:17:27.315 "data_offset": 2048, 00:17:27.315 "data_size": 63488 00:17:27.315 } 00:17:27.315 ] 00:17:27.315 }' 00:17:27.315 13:02:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.315 13:02:46 -- common/autotest_common.sh@10 -- # set +x 00:17:27.882 13:02:46 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:27.882 13:02:46 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:28.140 [2024-06-11 13:02:46.895115] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.140 13:02:46 -- bdev/bdev_raid.sh@430 -- # '[' 401b84a1-7fa5-430d-8885-574273e0f4cf '!=' 401b84a1-7fa5-430d-8885-574273e0f4cf ']' 00:17:28.140 13:02:46 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:28.140 13:02:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:28.140 13:02:46 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:28.140 13:02:46 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:28.399 [2024-06-11 13:02:47.086967] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.399 13:02:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.658 13:02:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.658 "name": "raid_bdev1", 00:17:28.658 "uuid": "401b84a1-7fa5-430d-8885-574273e0f4cf", 00:17:28.658 "strip_size_kb": 0, 00:17:28.658 "state": "online", 
00:17:28.658 "raid_level": "raid1", 00:17:28.658 "superblock": true, 00:17:28.658 "num_base_bdevs": 3, 00:17:28.658 "num_base_bdevs_discovered": 2, 00:17:28.658 "num_base_bdevs_operational": 2, 00:17:28.658 "base_bdevs_list": [ 00:17:28.658 { 00:17:28.658 "name": null, 00:17:28.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.658 "is_configured": false, 00:17:28.658 "data_offset": 2048, 00:17:28.658 "data_size": 63488 00:17:28.658 }, 00:17:28.658 { 00:17:28.658 "name": "pt2", 00:17:28.658 "uuid": "3683a3ff-26af-5a21-ade9-8797190fb0e8", 00:17:28.658 "is_configured": true, 00:17:28.658 "data_offset": 2048, 00:17:28.658 "data_size": 63488 00:17:28.658 }, 00:17:28.658 { 00:17:28.658 "name": "pt3", 00:17:28.658 "uuid": "ba8a6e15-1312-5c8b-a452-b4ff46e6f8c6", 00:17:28.658 "is_configured": true, 00:17:28.658 "data_offset": 2048, 00:17:28.658 "data_size": 63488 00:17:28.658 } 00:17:28.658 ] 00:17:28.658 }' 00:17:28.658 13:02:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.658 13:02:47 -- common/autotest_common.sh@10 -- # set +x 00:17:29.226 13:02:47 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:29.485 [2024-06-11 13:02:48.191190] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.485 [2024-06-11 13:02:48.191338] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.485 [2024-06-11 13:02:48.191526] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.485 [2024-06-11 13:02:48.191685] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.485 [2024-06-11 13:02:48.191783] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:29.485 13:02:48 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.485 13:02:48 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:29.743 13:02:48 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:29.744 13:02:48 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:29.744 13:02:48 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:29.744 13:02:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:29.744 13:02:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:30.002 13:02:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:30.002 13:02:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:30.002 13:02:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:30.260 13:02:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:30.260 13:02:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:30.260 13:02:48 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:30.260 13:02:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:30.260 13:02:48 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.260 [2024-06-11 13:02:49.087325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.260 [2024-06-11 13:02:49.087571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.260 [2024-06-11 
13:02:49.087752] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:30.260 [2024-06-11 13:02:49.087871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.260 [2024-06-11 13:02:49.090404] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.260 [2024-06-11 13:02:49.090574] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.260 [2024-06-11 13:02:49.090819] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:30.260 [2024-06-11 13:02:49.090985] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:30.260 pt2 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:30.519 "name": "raid_bdev1", 00:17:30.519 "uuid": "401b84a1-7fa5-430d-8885-574273e0f4cf", 00:17:30.519 "strip_size_kb": 0, 00:17:30.519 "state": "configuring", 00:17:30.519 "raid_level": "raid1", 00:17:30.519 "superblock": true, 00:17:30.519 "num_base_bdevs": 3, 00:17:30.519 "num_base_bdevs_discovered": 1, 00:17:30.519 "num_base_bdevs_operational": 2, 00:17:30.519 "base_bdevs_list": [ 00:17:30.519 { 00:17:30.519 "name": null, 00:17:30.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.519 "is_configured": false, 00:17:30.519 "data_offset": 2048, 00:17:30.519 "data_size": 63488 00:17:30.519 }, 00:17:30.519 { 00:17:30.519 "name": "pt2", 00:17:30.519 "uuid": "3683a3ff-26af-5a21-ade9-8797190fb0e8", 00:17:30.519 "is_configured": true, 00:17:30.519 "data_offset": 2048, 00:17:30.519 "data_size": 63488 00:17:30.519 }, 00:17:30.519 { 00:17:30.519 "name": null, 00:17:30.519 "uuid": "ba8a6e15-1312-5c8b-a452-b4ff46e6f8c6", 00:17:30.519 "is_configured": false, 00:17:30.519 "data_offset": 2048, 00:17:30.519 "data_size": 63488 00:17:30.519 } 00:17:30.519 ] 00:17:30.519 }' 00:17:30.519 13:02:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:30.519 13:02:49 -- common/autotest_common.sh@10 -- # set +x 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@462 -- # i=2 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:31.454 [2024-06-11 13:02:50.267616] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:31.454 [2024-06-11 13:02:50.267853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.454 [2024-06-11 13:02:50.268043] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:31.454 [2024-06-11 13:02:50.268177] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.454 [2024-06-11 13:02:50.268790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.454 [2024-06-11 13:02:50.268945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:31.454 [2024-06-11 13:02:50.269213] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:31.454 [2024-06-11 13:02:50.269344] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:31.454 [2024-06-11 13:02:50.269647] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:17:31.454 [2024-06-11 13:02:50.269781] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:31.454 [2024-06-11 13:02:50.269956] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:31.454 [2024-06-11 13:02:50.270378] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:17:31.454 [2024-06-11 13:02:50.270504] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:17:31.454 [2024-06-11 13:02:50.270727] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.454 pt3 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.454 13:02:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.713 13:02:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.713 "name": "raid_bdev1", 00:17:31.713 "uuid": "401b84a1-7fa5-430d-8885-574273e0f4cf", 00:17:31.713 "strip_size_kb": 0, 00:17:31.713 "state": "online", 00:17:31.713 "raid_level": "raid1", 00:17:31.713 "superblock": true, 00:17:31.713 "num_base_bdevs": 3, 00:17:31.713 "num_base_bdevs_discovered": 2, 00:17:31.713 "num_base_bdevs_operational": 2, 00:17:31.713 "base_bdevs_list": [ 00:17:31.713 { 00:17:31.713 "name": null, 00:17:31.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.713 "is_configured": false, 00:17:31.713 "data_offset": 2048, 00:17:31.713 "data_size": 63488 00:17:31.713 }, 00:17:31.713 { 00:17:31.713 "name": "pt2", 00:17:31.713 "uuid": "3683a3ff-26af-5a21-ade9-8797190fb0e8", 00:17:31.713 
"is_configured": true, 00:17:31.713 "data_offset": 2048, 00:17:31.713 "data_size": 63488 00:17:31.713 }, 00:17:31.713 { 00:17:31.713 "name": "pt3", 00:17:31.713 "uuid": "ba8a6e15-1312-5c8b-a452-b4ff46e6f8c6", 00:17:31.713 "is_configured": true, 00:17:31.713 "data_offset": 2048, 00:17:31.713 "data_size": 63488 00:17:31.713 } 00:17:31.713 ] 00:17:31.713 }' 00:17:31.713 13:02:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.713 13:02:50 -- common/autotest_common.sh@10 -- # set +x 00:17:32.649 13:02:51 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:17:32.649 13:02:51 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:32.649 [2024-06-11 13:02:51.399828] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.649 [2024-06-11 13:02:51.400020] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.649 [2024-06-11 13:02:51.400229] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.649 [2024-06-11 13:02:51.400395] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.649 [2024-06-11 13:02:51.400498] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:17:32.649 13:02:51 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.649 13:02:51 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:32.908 13:02:51 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:32.908 13:02:51 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:32.908 13:02:51 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.167 [2024-06-11 13:02:51.851874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.167 [2024-06-11 13:02:51.852122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.167 [2024-06-11 13:02:51.852291] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:33.167 [2024-06-11 13:02:51.852412] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.167 [2024-06-11 13:02:51.854605] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.167 [2024-06-11 13:02:51.854775] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.167 [2024-06-11 13:02:51.855017] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:33.167 [2024-06-11 13:02:51.855180] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.167 pt1 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.167 13:02:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.443 13:02:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:33.444 "name": "raid_bdev1", 00:17:33.444 "uuid": "401b84a1-7fa5-430d-8885-574273e0f4cf", 00:17:33.444 "strip_size_kb": 0, 00:17:33.444 "state": "configuring", 00:17:33.444 "raid_level": "raid1", 00:17:33.444 "superblock": true, 00:17:33.444 "num_base_bdevs": 3, 00:17:33.444 "num_base_bdevs_discovered": 1, 00:17:33.444 "num_base_bdevs_operational": 3, 00:17:33.444 "base_bdevs_list": [ 00:17:33.444 { 00:17:33.444 "name": "pt1", 00:17:33.444 "uuid": "495eb547-1477-5475-96a1-1a488cd11111", 00:17:33.444 "is_configured": true, 00:17:33.444 "data_offset": 2048, 00:17:33.444 "data_size": 63488 00:17:33.444 }, 00:17:33.444 { 00:17:33.444 "name": null, 00:17:33.444 "uuid": "3683a3ff-26af-5a21-ade9-8797190fb0e8", 00:17:33.444 "is_configured": false, 00:17:33.444 "data_offset": 2048, 00:17:33.444 "data_size": 63488 00:17:33.444 }, 00:17:33.444 { 00:17:33.444 "name": null, 00:17:33.444 "uuid": "ba8a6e15-1312-5c8b-a452-b4ff46e6f8c6", 00:17:33.444 "is_configured": false, 00:17:33.444 "data_offset": 2048, 00:17:33.444 "data_size": 63488 00:17:33.444 } 00:17:33.444 ] 00:17:33.444 }' 00:17:33.444 13:02:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.444 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:17:34.017 13:02:52 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:34.017 13:02:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:34.017 13:02:52 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:34.278 13:02:52 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:34.278 13:02:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:34.278 13:02:52 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:34.278 13:02:53 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:34.278 13:02:53 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:34.278 13:02:53 -- bdev/bdev_raid.sh@489 -- # i=2 00:17:34.278 13:02:53 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:34.537 [2024-06-11 13:02:53.300244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:34.537 [2024-06-11 13:02:53.300563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.537 [2024-06-11 13:02:53.300746] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:34.537 [2024-06-11 13:02:53.300875] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.537 [2024-06-11 13:02:53.301539] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.537 [2024-06-11 13:02:53.301715] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:34.537 [2024-06-11 13:02:53.301951] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:34.537 
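The pt3 step above is the interesting part of this test case: creating a passthru bdev on top of a base device that still carries a raid superblock makes the raid module's examine path find that superblock and claim the bdev for raid_bdev1. A rough sketch of the two RPCs doing the work, with the socket path, names and UUID copied from this log and the jq filter borrowed from the test's own verify helper:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # re-creating the passthru bdev triggers examine; the on-disk raid superblock is found
    $RPC bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
    # the raid bdev claims pt3 and can be inspected again through the raid RPCs
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'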
[2024-06-11 13:02:53.302053] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:34.537 [2024-06-11 13:02:53.302154] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.537 [2024-06-11 13:02:53.302260] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:17:34.537 [2024-06-11 13:02:53.302447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:34.537 pt3 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.537 13:02:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.796 13:02:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:34.796 "name": "raid_bdev1", 00:17:34.796 "uuid": "401b84a1-7fa5-430d-8885-574273e0f4cf", 00:17:34.796 "strip_size_kb": 0, 00:17:34.796 "state": "configuring", 00:17:34.796 "raid_level": "raid1", 00:17:34.796 "superblock": true, 00:17:34.796 "num_base_bdevs": 3, 00:17:34.796 "num_base_bdevs_discovered": 1, 00:17:34.796 "num_base_bdevs_operational": 2, 00:17:34.796 "base_bdevs_list": [ 00:17:34.796 { 00:17:34.796 "name": null, 00:17:34.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.796 "is_configured": false, 00:17:34.796 "data_offset": 2048, 00:17:34.796 "data_size": 63488 00:17:34.796 }, 00:17:34.796 { 00:17:34.796 "name": null, 00:17:34.796 "uuid": "3683a3ff-26af-5a21-ade9-8797190fb0e8", 00:17:34.796 "is_configured": false, 00:17:34.796 "data_offset": 2048, 00:17:34.796 "data_size": 63488 00:17:34.796 }, 00:17:34.796 { 00:17:34.796 "name": "pt3", 00:17:34.796 "uuid": "ba8a6e15-1312-5c8b-a452-b4ff46e6f8c6", 00:17:34.796 "is_configured": true, 00:17:34.796 "data_offset": 2048, 00:17:34.796 "data_size": 63488 00:17:34.796 } 00:17:34.796 ] 00:17:34.796 }' 00:17:34.796 13:02:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:34.796 13:02:53 -- common/autotest_common.sh@10 -- # set +x 00:17:35.364 13:02:54 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:35.364 13:02:54 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:35.364 13:02:54 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.622 [2024-06-11 13:02:54.308431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.622 [2024-06-11 13:02:54.308761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.622 [2024-06-11 13:02:54.308922] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:35.622 [2024-06-11 13:02:54.309052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.622 [2024-06-11 13:02:54.309820] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.622 [2024-06-11 13:02:54.310035] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.622 [2024-06-11 13:02:54.310291] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:35.622 [2024-06-11 13:02:54.310431] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.622 [2024-06-11 13:02:54.310667] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:17:35.622 [2024-06-11 13:02:54.310817] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:35.622 [2024-06-11 13:02:54.311047] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:35.622 [2024-06-11 13:02:54.311529] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:17:35.622 [2024-06-11 13:02:54.311663] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:17:35.622 [2024-06-11 13:02:54.311918] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.622 pt2 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.622 13:02:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.880 13:02:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.880 "name": "raid_bdev1", 00:17:35.880 "uuid": "401b84a1-7fa5-430d-8885-574273e0f4cf", 00:17:35.880 "strip_size_kb": 0, 00:17:35.880 "state": "online", 00:17:35.880 "raid_level": "raid1", 00:17:35.880 "superblock": true, 00:17:35.880 "num_base_bdevs": 3, 00:17:35.880 "num_base_bdevs_discovered": 2, 00:17:35.880 "num_base_bdevs_operational": 2, 00:17:35.880 "base_bdevs_list": [ 00:17:35.880 { 00:17:35.880 "name": null, 00:17:35.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.880 "is_configured": false, 00:17:35.880 "data_offset": 2048, 00:17:35.880 "data_size": 63488 00:17:35.880 }, 00:17:35.880 { 00:17:35.880 "name": "pt2", 00:17:35.881 "uuid": "3683a3ff-26af-5a21-ade9-8797190fb0e8", 00:17:35.881 "is_configured": true, 00:17:35.881 "data_offset": 2048, 00:17:35.881 "data_size": 63488 00:17:35.881 
}, 00:17:35.881 { 00:17:35.881 "name": "pt3", 00:17:35.881 "uuid": "ba8a6e15-1312-5c8b-a452-b4ff46e6f8c6", 00:17:35.881 "is_configured": true, 00:17:35.881 "data_offset": 2048, 00:17:35.881 "data_size": 63488 00:17:35.881 } 00:17:35.881 ] 00:17:35.881 }' 00:17:35.881 13:02:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.881 13:02:54 -- common/autotest_common.sh@10 -- # set +x 00:17:36.815 13:02:55 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:36.815 13:02:55 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:36.815 [2024-06-11 13:02:55.468859] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.815 13:02:55 -- bdev/bdev_raid.sh@506 -- # '[' 401b84a1-7fa5-430d-8885-574273e0f4cf '!=' 401b84a1-7fa5-430d-8885-574273e0f4cf ']' 00:17:36.815 13:02:55 -- bdev/bdev_raid.sh@511 -- # killprocess 120779 00:17:36.815 13:02:55 -- common/autotest_common.sh@926 -- # '[' -z 120779 ']' 00:17:36.815 13:02:55 -- common/autotest_common.sh@930 -- # kill -0 120779 00:17:36.815 13:02:55 -- common/autotest_common.sh@931 -- # uname 00:17:36.815 13:02:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:36.815 13:02:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120779 00:17:36.815 killing process with pid 120779 00:17:36.815 13:02:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:36.815 13:02:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:36.815 13:02:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120779' 00:17:36.815 13:02:55 -- common/autotest_common.sh@945 -- # kill 120779 00:17:36.815 13:02:55 -- common/autotest_common.sh@950 -- # wait 120779 00:17:36.815 [2024-06-11 13:02:55.503569] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:36.815 [2024-06-11 13:02:55.503642] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.815 [2024-06-11 13:02:55.503714] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.816 [2024-06-11 13:02:55.503765] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:17:37.074 [2024-06-11 13:02:55.706775] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:38.009 ************************************ 00:17:38.009 END TEST raid_superblock_test 00:17:38.009 ************************************ 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:38.009 00:17:38.009 real 0m19.069s 00:17:38.009 user 0m35.337s 00:17:38.009 sys 0m1.991s 00:17:38.009 13:02:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:38.009 13:02:56 -- common/autotest_common.sh@10 -- # set +x 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:38.009 13:02:56 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:38.009 13:02:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:38.009 13:02:56 -- common/autotest_common.sh@10 -- # set +x 00:17:38.009 ************************************ 00:17:38.009 START TEST raid_state_function_test 00:17:38.009 ************************************ 00:17:38.009 13:02:56 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@226 -- # raid_pid=121423 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121423' 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:38.009 Process raid pid: 121423 00:17:38.009 13:02:56 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121423 /var/tmp/spdk-raid.sock 00:17:38.009 13:02:56 -- common/autotest_common.sh@819 -- # '[' -z 121423 ']' 00:17:38.009 13:02:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:38.009 13:02:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:38.009 13:02:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:38.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:38.009 13:02:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:38.009 13:02:56 -- common/autotest_common.sh@10 -- # set +x 00:17:38.009 [2024-06-11 13:02:56.840873] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
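From here on the run is raid_state_function_test: a bare bdev_svc app is started with the bdev_raid log flag, and every rpc.py call below talks to its UNIX socket. A condensed sketch of that startup, using the command line shown above; the rpc_get_methods readiness poll is an assumption standing in for the harness's waitforlisten helper:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll the RPC socket until the app answers; only then can the bdev_* RPCs below be issued
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done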
00:17:38.009 [2024-06-11 13:02:56.841266] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.269 [2024-06-11 13:02:57.010885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.527 [2024-06-11 13:02:57.256165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.785 [2024-06-11 13:02:57.434656] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.043 13:02:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:39.043 13:02:57 -- common/autotest_common.sh@852 -- # return 0 00:17:39.043 13:02:57 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:39.301 [2024-06-11 13:02:57.961346] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:39.301 [2024-06-11 13:02:57.961647] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:39.302 [2024-06-11 13:02:57.961765] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:39.302 [2024-06-11 13:02:57.961830] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:39.302 [2024-06-11 13:02:57.961953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:39.302 [2024-06-11 13:02:57.962038] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:39.302 [2024-06-11 13:02:57.962250] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:39.302 [2024-06-11 13:02:57.962308] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.302 13:02:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.559 13:02:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.559 "name": "Existed_Raid", 00:17:39.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.559 "strip_size_kb": 64, 00:17:39.559 "state": "configuring", 00:17:39.559 "raid_level": "raid0", 00:17:39.559 "superblock": false, 00:17:39.559 "num_base_bdevs": 4, 00:17:39.559 "num_base_bdevs_discovered": 0, 00:17:39.559 "num_base_bdevs_operational": 4, 00:17:39.559 "base_bdevs_list": [ 00:17:39.559 { 00:17:39.559 
"name": "BaseBdev1", 00:17:39.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.559 "is_configured": false, 00:17:39.559 "data_offset": 0, 00:17:39.559 "data_size": 0 00:17:39.559 }, 00:17:39.559 { 00:17:39.559 "name": "BaseBdev2", 00:17:39.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.559 "is_configured": false, 00:17:39.559 "data_offset": 0, 00:17:39.559 "data_size": 0 00:17:39.559 }, 00:17:39.559 { 00:17:39.559 "name": "BaseBdev3", 00:17:39.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.559 "is_configured": false, 00:17:39.559 "data_offset": 0, 00:17:39.559 "data_size": 0 00:17:39.559 }, 00:17:39.559 { 00:17:39.559 "name": "BaseBdev4", 00:17:39.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.559 "is_configured": false, 00:17:39.559 "data_offset": 0, 00:17:39.559 "data_size": 0 00:17:39.559 } 00:17:39.559 ] 00:17:39.559 }' 00:17:39.559 13:02:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.559 13:02:58 -- common/autotest_common.sh@10 -- # set +x 00:17:40.125 13:02:58 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:40.383 [2024-06-11 13:02:59.089435] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.383 [2024-06-11 13:02:59.089646] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:40.383 13:02:59 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:40.642 [2024-06-11 13:02:59.357624] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.642 [2024-06-11 13:02:59.357806] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.642 [2024-06-11 13:02:59.357931] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.642 [2024-06-11 13:02:59.357996] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.642 [2024-06-11 13:02:59.358139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:40.642 [2024-06-11 13:02:59.358210] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:40.642 [2024-06-11 13:02:59.358305] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:40.642 [2024-06-11 13:02:59.358359] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:40.642 13:02:59 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:40.900 [2024-06-11 13:02:59.646858] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.900 BaseBdev1 00:17:40.900 13:02:59 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:40.900 13:02:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:40.900 13:02:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:40.900 13:02:59 -- common/autotest_common.sh@889 -- # local i 00:17:40.900 13:02:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:40.900 13:02:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:40.900 13:02:59 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:41.158 13:02:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:41.416 [ 00:17:41.416 { 00:17:41.416 "name": "BaseBdev1", 00:17:41.416 "aliases": [ 00:17:41.416 "3af733f4-1d9f-4a86-afda-7abc342ff1a4" 00:17:41.416 ], 00:17:41.416 "product_name": "Malloc disk", 00:17:41.416 "block_size": 512, 00:17:41.416 "num_blocks": 65536, 00:17:41.416 "uuid": "3af733f4-1d9f-4a86-afda-7abc342ff1a4", 00:17:41.416 "assigned_rate_limits": { 00:17:41.416 "rw_ios_per_sec": 0, 00:17:41.416 "rw_mbytes_per_sec": 0, 00:17:41.416 "r_mbytes_per_sec": 0, 00:17:41.416 "w_mbytes_per_sec": 0 00:17:41.416 }, 00:17:41.416 "claimed": true, 00:17:41.416 "claim_type": "exclusive_write", 00:17:41.416 "zoned": false, 00:17:41.416 "supported_io_types": { 00:17:41.416 "read": true, 00:17:41.416 "write": true, 00:17:41.416 "unmap": true, 00:17:41.416 "write_zeroes": true, 00:17:41.416 "flush": true, 00:17:41.416 "reset": true, 00:17:41.416 "compare": false, 00:17:41.416 "compare_and_write": false, 00:17:41.416 "abort": true, 00:17:41.416 "nvme_admin": false, 00:17:41.416 "nvme_io": false 00:17:41.416 }, 00:17:41.416 "memory_domains": [ 00:17:41.416 { 00:17:41.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.416 "dma_device_type": 2 00:17:41.416 } 00:17:41.416 ], 00:17:41.416 "driver_specific": {} 00:17:41.416 } 00:17:41.416 ] 00:17:41.416 13:03:00 -- common/autotest_common.sh@895 -- # return 0 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.416 13:03:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.695 13:03:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.695 "name": "Existed_Raid", 00:17:41.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.695 "strip_size_kb": 64, 00:17:41.695 "state": "configuring", 00:17:41.695 "raid_level": "raid0", 00:17:41.695 "superblock": false, 00:17:41.695 "num_base_bdevs": 4, 00:17:41.695 "num_base_bdevs_discovered": 1, 00:17:41.695 "num_base_bdevs_operational": 4, 00:17:41.695 "base_bdevs_list": [ 00:17:41.695 { 00:17:41.695 "name": "BaseBdev1", 00:17:41.695 "uuid": "3af733f4-1d9f-4a86-afda-7abc342ff1a4", 00:17:41.695 "is_configured": true, 00:17:41.695 "data_offset": 0, 00:17:41.695 "data_size": 65536 00:17:41.695 }, 00:17:41.695 { 00:17:41.695 "name": "BaseBdev2", 00:17:41.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.695 "is_configured": false, 00:17:41.695 "data_offset": 0, 00:17:41.695 "data_size": 0 00:17:41.695 }, 
00:17:41.695 { 00:17:41.695 "name": "BaseBdev3", 00:17:41.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.695 "is_configured": false, 00:17:41.695 "data_offset": 0, 00:17:41.695 "data_size": 0 00:17:41.695 }, 00:17:41.695 { 00:17:41.695 "name": "BaseBdev4", 00:17:41.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.695 "is_configured": false, 00:17:41.695 "data_offset": 0, 00:17:41.695 "data_size": 0 00:17:41.695 } 00:17:41.695 ] 00:17:41.695 }' 00:17:41.695 13:03:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.695 13:03:00 -- common/autotest_common.sh@10 -- # set +x 00:17:42.262 13:03:01 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:42.520 [2024-06-11 13:03:01.267236] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.520 [2024-06-11 13:03:01.267413] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:42.520 13:03:01 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:42.520 13:03:01 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:42.777 [2024-06-11 13:03:01.463307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.777 [2024-06-11 13:03:01.465163] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.777 [2024-06-11 13:03:01.465402] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.777 [2024-06-11 13:03:01.465538] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.777 [2024-06-11 13:03:01.465717] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.777 [2024-06-11 13:03:01.465835] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:42.777 [2024-06-11 13:03:01.465888] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.777 13:03:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.035 13:03:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.035 "name": "Existed_Raid", 00:17:43.035 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.035 "strip_size_kb": 64, 00:17:43.035 "state": "configuring", 00:17:43.035 "raid_level": "raid0", 00:17:43.035 "superblock": false, 00:17:43.035 "num_base_bdevs": 4, 00:17:43.035 "num_base_bdevs_discovered": 1, 00:17:43.035 "num_base_bdevs_operational": 4, 00:17:43.035 "base_bdevs_list": [ 00:17:43.035 { 00:17:43.035 "name": "BaseBdev1", 00:17:43.035 "uuid": "3af733f4-1d9f-4a86-afda-7abc342ff1a4", 00:17:43.035 "is_configured": true, 00:17:43.035 "data_offset": 0, 00:17:43.035 "data_size": 65536 00:17:43.035 }, 00:17:43.035 { 00:17:43.035 "name": "BaseBdev2", 00:17:43.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.035 "is_configured": false, 00:17:43.035 "data_offset": 0, 00:17:43.035 "data_size": 0 00:17:43.035 }, 00:17:43.035 { 00:17:43.035 "name": "BaseBdev3", 00:17:43.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.035 "is_configured": false, 00:17:43.035 "data_offset": 0, 00:17:43.035 "data_size": 0 00:17:43.035 }, 00:17:43.035 { 00:17:43.035 "name": "BaseBdev4", 00:17:43.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.035 "is_configured": false, 00:17:43.035 "data_offset": 0, 00:17:43.035 "data_size": 0 00:17:43.035 } 00:17:43.035 ] 00:17:43.035 }' 00:17:43.035 13:03:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.035 13:03:01 -- common/autotest_common.sh@10 -- # set +x 00:17:43.600 13:03:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:43.857 [2024-06-11 13:03:02.698061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.857 BaseBdev2 00:17:44.115 13:03:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:44.115 13:03:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:44.115 13:03:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:44.115 13:03:02 -- common/autotest_common.sh@889 -- # local i 00:17:44.115 13:03:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:44.115 13:03:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:44.115 13:03:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:44.115 13:03:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:44.373 [ 00:17:44.373 { 00:17:44.373 "name": "BaseBdev2", 00:17:44.373 "aliases": [ 00:17:44.373 "24c50c71-7517-4e16-8d75-44dad5284f5d" 00:17:44.373 ], 00:17:44.373 "product_name": "Malloc disk", 00:17:44.373 "block_size": 512, 00:17:44.373 "num_blocks": 65536, 00:17:44.373 "uuid": "24c50c71-7517-4e16-8d75-44dad5284f5d", 00:17:44.373 "assigned_rate_limits": { 00:17:44.373 "rw_ios_per_sec": 0, 00:17:44.373 "rw_mbytes_per_sec": 0, 00:17:44.373 "r_mbytes_per_sec": 0, 00:17:44.373 "w_mbytes_per_sec": 0 00:17:44.373 }, 00:17:44.373 "claimed": true, 00:17:44.373 "claim_type": "exclusive_write", 00:17:44.373 "zoned": false, 00:17:44.373 "supported_io_types": { 00:17:44.373 "read": true, 00:17:44.373 "write": true, 00:17:44.373 "unmap": true, 00:17:44.373 "write_zeroes": true, 00:17:44.373 "flush": true, 00:17:44.373 "reset": true, 00:17:44.373 "compare": false, 00:17:44.373 "compare_and_write": false, 00:17:44.373 "abort": true, 00:17:44.373 "nvme_admin": false, 00:17:44.373 "nvme_io": false 00:17:44.373 }, 00:17:44.373 "memory_domains": [ 
00:17:44.373 { 00:17:44.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.373 "dma_device_type": 2 00:17:44.373 } 00:17:44.373 ], 00:17:44.373 "driver_specific": {} 00:17:44.373 } 00:17:44.373 ] 00:17:44.373 13:03:03 -- common/autotest_common.sh@895 -- # return 0 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.373 13:03:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.632 13:03:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.632 "name": "Existed_Raid", 00:17:44.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.632 "strip_size_kb": 64, 00:17:44.632 "state": "configuring", 00:17:44.632 "raid_level": "raid0", 00:17:44.632 "superblock": false, 00:17:44.632 "num_base_bdevs": 4, 00:17:44.632 "num_base_bdevs_discovered": 2, 00:17:44.632 "num_base_bdevs_operational": 4, 00:17:44.632 "base_bdevs_list": [ 00:17:44.632 { 00:17:44.632 "name": "BaseBdev1", 00:17:44.632 "uuid": "3af733f4-1d9f-4a86-afda-7abc342ff1a4", 00:17:44.632 "is_configured": true, 00:17:44.632 "data_offset": 0, 00:17:44.632 "data_size": 65536 00:17:44.632 }, 00:17:44.632 { 00:17:44.632 "name": "BaseBdev2", 00:17:44.632 "uuid": "24c50c71-7517-4e16-8d75-44dad5284f5d", 00:17:44.632 "is_configured": true, 00:17:44.632 "data_offset": 0, 00:17:44.632 "data_size": 65536 00:17:44.632 }, 00:17:44.632 { 00:17:44.632 "name": "BaseBdev3", 00:17:44.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.632 "is_configured": false, 00:17:44.632 "data_offset": 0, 00:17:44.632 "data_size": 0 00:17:44.632 }, 00:17:44.632 { 00:17:44.632 "name": "BaseBdev4", 00:17:44.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.632 "is_configured": false, 00:17:44.632 "data_offset": 0, 00:17:44.632 "data_size": 0 00:17:44.632 } 00:17:44.632 ] 00:17:44.632 }' 00:17:44.632 13:03:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.632 13:03:03 -- common/autotest_common.sh@10 -- # set +x 00:17:45.567 13:03:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:45.567 [2024-06-11 13:03:04.342756] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.567 BaseBdev3 00:17:45.567 13:03:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:45.567 13:03:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:45.567 13:03:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:45.567 
13:03:04 -- common/autotest_common.sh@889 -- # local i 00:17:45.567 13:03:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:45.567 13:03:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:45.567 13:03:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.825 13:03:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:46.084 [ 00:17:46.084 { 00:17:46.084 "name": "BaseBdev3", 00:17:46.084 "aliases": [ 00:17:46.084 "1bd1d02a-4f0b-4a2a-ae62-32d8893b5945" 00:17:46.084 ], 00:17:46.084 "product_name": "Malloc disk", 00:17:46.084 "block_size": 512, 00:17:46.084 "num_blocks": 65536, 00:17:46.084 "uuid": "1bd1d02a-4f0b-4a2a-ae62-32d8893b5945", 00:17:46.084 "assigned_rate_limits": { 00:17:46.084 "rw_ios_per_sec": 0, 00:17:46.084 "rw_mbytes_per_sec": 0, 00:17:46.084 "r_mbytes_per_sec": 0, 00:17:46.084 "w_mbytes_per_sec": 0 00:17:46.084 }, 00:17:46.084 "claimed": true, 00:17:46.084 "claim_type": "exclusive_write", 00:17:46.084 "zoned": false, 00:17:46.084 "supported_io_types": { 00:17:46.084 "read": true, 00:17:46.084 "write": true, 00:17:46.084 "unmap": true, 00:17:46.084 "write_zeroes": true, 00:17:46.084 "flush": true, 00:17:46.084 "reset": true, 00:17:46.084 "compare": false, 00:17:46.084 "compare_and_write": false, 00:17:46.084 "abort": true, 00:17:46.084 "nvme_admin": false, 00:17:46.084 "nvme_io": false 00:17:46.084 }, 00:17:46.084 "memory_domains": [ 00:17:46.084 { 00:17:46.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.084 "dma_device_type": 2 00:17:46.084 } 00:17:46.084 ], 00:17:46.084 "driver_specific": {} 00:17:46.084 } 00:17:46.084 ] 00:17:46.084 13:03:04 -- common/autotest_common.sh@895 -- # return 0 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.084 13:03:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.342 13:03:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.342 "name": "Existed_Raid", 00:17:46.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.342 "strip_size_kb": 64, 00:17:46.342 "state": "configuring", 00:17:46.342 "raid_level": "raid0", 00:17:46.342 "superblock": false, 00:17:46.342 "num_base_bdevs": 4, 00:17:46.342 "num_base_bdevs_discovered": 3, 00:17:46.342 "num_base_bdevs_operational": 4, 00:17:46.342 "base_bdevs_list": [ 00:17:46.342 { 00:17:46.342 "name": 
"BaseBdev1", 00:17:46.342 "uuid": "3af733f4-1d9f-4a86-afda-7abc342ff1a4", 00:17:46.342 "is_configured": true, 00:17:46.342 "data_offset": 0, 00:17:46.342 "data_size": 65536 00:17:46.342 }, 00:17:46.342 { 00:17:46.342 "name": "BaseBdev2", 00:17:46.342 "uuid": "24c50c71-7517-4e16-8d75-44dad5284f5d", 00:17:46.342 "is_configured": true, 00:17:46.342 "data_offset": 0, 00:17:46.342 "data_size": 65536 00:17:46.342 }, 00:17:46.342 { 00:17:46.342 "name": "BaseBdev3", 00:17:46.342 "uuid": "1bd1d02a-4f0b-4a2a-ae62-32d8893b5945", 00:17:46.342 "is_configured": true, 00:17:46.342 "data_offset": 0, 00:17:46.342 "data_size": 65536 00:17:46.342 }, 00:17:46.342 { 00:17:46.342 "name": "BaseBdev4", 00:17:46.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.342 "is_configured": false, 00:17:46.342 "data_offset": 0, 00:17:46.342 "data_size": 0 00:17:46.342 } 00:17:46.342 ] 00:17:46.342 }' 00:17:46.342 13:03:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.342 13:03:05 -- common/autotest_common.sh@10 -- # set +x 00:17:46.921 13:03:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:47.180 [2024-06-11 13:03:05.927673] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:47.180 [2024-06-11 13:03:05.927972] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:47.180 [2024-06-11 13:03:05.928013] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:47.180 [2024-06-11 13:03:05.928252] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:47.180 [2024-06-11 13:03:05.928733] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:47.180 [2024-06-11 13:03:05.928889] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:47.180 [2024-06-11 13:03:05.929273] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.180 BaseBdev4 00:17:47.180 13:03:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:47.180 13:03:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:47.180 13:03:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:47.180 13:03:05 -- common/autotest_common.sh@889 -- # local i 00:17:47.180 13:03:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:47.180 13:03:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:47.180 13:03:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.439 13:03:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:47.697 [ 00:17:47.697 { 00:17:47.697 "name": "BaseBdev4", 00:17:47.697 "aliases": [ 00:17:47.697 "a4bd454c-4852-44fc-b268-7cd1ee8928a7" 00:17:47.697 ], 00:17:47.697 "product_name": "Malloc disk", 00:17:47.697 "block_size": 512, 00:17:47.697 "num_blocks": 65536, 00:17:47.697 "uuid": "a4bd454c-4852-44fc-b268-7cd1ee8928a7", 00:17:47.697 "assigned_rate_limits": { 00:17:47.697 "rw_ios_per_sec": 0, 00:17:47.697 "rw_mbytes_per_sec": 0, 00:17:47.697 "r_mbytes_per_sec": 0, 00:17:47.697 "w_mbytes_per_sec": 0 00:17:47.697 }, 00:17:47.697 "claimed": true, 00:17:47.697 "claim_type": "exclusive_write", 00:17:47.697 "zoned": false, 00:17:47.698 
"supported_io_types": { 00:17:47.698 "read": true, 00:17:47.698 "write": true, 00:17:47.698 "unmap": true, 00:17:47.698 "write_zeroes": true, 00:17:47.698 "flush": true, 00:17:47.698 "reset": true, 00:17:47.698 "compare": false, 00:17:47.698 "compare_and_write": false, 00:17:47.698 "abort": true, 00:17:47.698 "nvme_admin": false, 00:17:47.698 "nvme_io": false 00:17:47.698 }, 00:17:47.698 "memory_domains": [ 00:17:47.698 { 00:17:47.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.698 "dma_device_type": 2 00:17:47.698 } 00:17:47.698 ], 00:17:47.698 "driver_specific": {} 00:17:47.698 } 00:17:47.698 ] 00:17:47.698 13:03:06 -- common/autotest_common.sh@895 -- # return 0 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.698 13:03:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.956 13:03:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.956 "name": "Existed_Raid", 00:17:47.956 "uuid": "b8fe0dba-89a4-4c17-9ec2-0f8f735a696f", 00:17:47.957 "strip_size_kb": 64, 00:17:47.957 "state": "online", 00:17:47.957 "raid_level": "raid0", 00:17:47.957 "superblock": false, 00:17:47.957 "num_base_bdevs": 4, 00:17:47.957 "num_base_bdevs_discovered": 4, 00:17:47.957 "num_base_bdevs_operational": 4, 00:17:47.957 "base_bdevs_list": [ 00:17:47.957 { 00:17:47.957 "name": "BaseBdev1", 00:17:47.957 "uuid": "3af733f4-1d9f-4a86-afda-7abc342ff1a4", 00:17:47.957 "is_configured": true, 00:17:47.957 "data_offset": 0, 00:17:47.957 "data_size": 65536 00:17:47.957 }, 00:17:47.957 { 00:17:47.957 "name": "BaseBdev2", 00:17:47.957 "uuid": "24c50c71-7517-4e16-8d75-44dad5284f5d", 00:17:47.957 "is_configured": true, 00:17:47.957 "data_offset": 0, 00:17:47.957 "data_size": 65536 00:17:47.957 }, 00:17:47.957 { 00:17:47.957 "name": "BaseBdev3", 00:17:47.957 "uuid": "1bd1d02a-4f0b-4a2a-ae62-32d8893b5945", 00:17:47.957 "is_configured": true, 00:17:47.957 "data_offset": 0, 00:17:47.957 "data_size": 65536 00:17:47.957 }, 00:17:47.957 { 00:17:47.957 "name": "BaseBdev4", 00:17:47.957 "uuid": "a4bd454c-4852-44fc-b268-7cd1ee8928a7", 00:17:47.957 "is_configured": true, 00:17:47.957 "data_offset": 0, 00:17:47.957 "data_size": 65536 00:17:47.957 } 00:17:47.957 ] 00:17:47.957 }' 00:17:47.957 13:03:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.957 13:03:06 -- common/autotest_common.sh@10 -- # set +x 00:17:48.524 13:03:07 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.783 
[2024-06-11 13:03:07.512197] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.783 [2024-06-11 13:03:07.512372] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.783 [2024-06-11 13:03:07.512574] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.783 13:03:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.042 13:03:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:49.042 "name": "Existed_Raid", 00:17:49.042 "uuid": "b8fe0dba-89a4-4c17-9ec2-0f8f735a696f", 00:17:49.042 "strip_size_kb": 64, 00:17:49.042 "state": "offline", 00:17:49.042 "raid_level": "raid0", 00:17:49.042 "superblock": false, 00:17:49.042 "num_base_bdevs": 4, 00:17:49.042 "num_base_bdevs_discovered": 3, 00:17:49.042 "num_base_bdevs_operational": 3, 00:17:49.042 "base_bdevs_list": [ 00:17:49.042 { 00:17:49.042 "name": null, 00:17:49.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.042 "is_configured": false, 00:17:49.042 "data_offset": 0, 00:17:49.042 "data_size": 65536 00:17:49.042 }, 00:17:49.042 { 00:17:49.042 "name": "BaseBdev2", 00:17:49.042 "uuid": "24c50c71-7517-4e16-8d75-44dad5284f5d", 00:17:49.042 "is_configured": true, 00:17:49.042 "data_offset": 0, 00:17:49.042 "data_size": 65536 00:17:49.042 }, 00:17:49.042 { 00:17:49.042 "name": "BaseBdev3", 00:17:49.042 "uuid": "1bd1d02a-4f0b-4a2a-ae62-32d8893b5945", 00:17:49.042 "is_configured": true, 00:17:49.042 "data_offset": 0, 00:17:49.042 "data_size": 65536 00:17:49.042 }, 00:17:49.042 { 00:17:49.042 "name": "BaseBdev4", 00:17:49.042 "uuid": "a4bd454c-4852-44fc-b268-7cd1ee8928a7", 00:17:49.042 "is_configured": true, 00:17:49.042 "data_offset": 0, 00:17:49.042 "data_size": 65536 00:17:49.042 } 00:17:49.042 ] 00:17:49.042 }' 00:17:49.042 13:03:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:49.042 13:03:07 -- common/autotest_common.sh@10 -- # set +x 00:17:49.609 13:03:08 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:49.609 13:03:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.609 13:03:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.609 
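Because raid0 has no redundancy (the has_redundancy check above returned 1, so expected_state switched to offline), removing BaseBdev1 already flipped Existed_Raid to "offline". The loop that follows deletes the remaining base bdevs one at a time, confirming before each delete that the offline array is still registered; a condensed sketch, with commands and names taken from this log and the loop structure simplified from bdev_raid.sh:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
        # the offline array must still be registered before the next base bdev goes away
        $RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"]'   # expect "Existed_Raid"
        $RPC bdev_malloc_delete "$bdev"
    done
    # once the last base bdev is gone, Existed_Raid itself is cleaned up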
13:03:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:49.867 13:03:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:49.867 13:03:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.867 13:03:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:50.125 [2024-06-11 13:03:08.869051] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:50.125 13:03:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.125 13:03:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.125 13:03:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.125 13:03:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:50.384 13:03:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:50.384 13:03:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.384 13:03:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:50.642 [2024-06-11 13:03:09.378100] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.642 13:03:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.642 13:03:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.642 13:03:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.642 13:03:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:50.899 13:03:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:50.899 13:03:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.899 13:03:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:51.157 [2024-06-11 13:03:09.945019] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:51.157 [2024-06-11 13:03:09.945195] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:51.416 13:03:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:51.416 13:03:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:51.416 13:03:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.416 13:03:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:51.416 13:03:10 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:51.416 13:03:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:51.416 13:03:10 -- bdev/bdev_raid.sh@287 -- # killprocess 121423 00:17:51.416 13:03:10 -- common/autotest_common.sh@926 -- # '[' -z 121423 ']' 00:17:51.416 13:03:10 -- common/autotest_common.sh@930 -- # kill -0 121423 00:17:51.416 13:03:10 -- common/autotest_common.sh@931 -- # uname 00:17:51.416 13:03:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:51.416 13:03:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121423 00:17:51.416 13:03:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:51.416 13:03:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:51.416 13:03:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121423' 00:17:51.416 killing process with pid 121423 00:17:51.416 13:03:10 -- common/autotest_common.sh@945 -- # kill 121423 
00:17:51.416 [2024-06-11 13:03:10.232381] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.416 13:03:10 -- common/autotest_common.sh@950 -- # wait 121423 00:17:51.416 [2024-06-11 13:03:10.232586] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.352 ************************************ 00:17:52.352 END TEST raid_state_function_test 00:17:52.352 ************************************ 00:17:52.352 13:03:11 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:52.352 00:17:52.352 real 0m14.398s 00:17:52.352 user 0m26.078s 00:17:52.352 sys 0m1.498s 00:17:52.353 13:03:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:52.353 13:03:11 -- common/autotest_common.sh@10 -- # set +x 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:17:52.611 13:03:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:52.611 13:03:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:52.611 13:03:11 -- common/autotest_common.sh@10 -- # set +x 00:17:52.611 ************************************ 00:17:52.611 START TEST raid_state_function_test_sb 00:17:52.611 ************************************ 00:17:52.611 13:03:11 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.611 13:03:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=121885 
00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:52.612 Process raid pid: 121885 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121885' 00:17:52.612 13:03:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121885 /var/tmp/spdk-raid.sock 00:17:52.612 13:03:11 -- common/autotest_common.sh@819 -- # '[' -z 121885 ']' 00:17:52.612 13:03:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:52.612 13:03:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:52.612 13:03:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:52.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:52.612 13:03:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:52.612 13:03:11 -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 [2024-06-11 13:03:11.299732] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:52.612 [2024-06-11 13:03:11.300147] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.870 [2024-06-11 13:03:11.462712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.870 [2024-06-11 13:03:11.644677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.129 [2024-06-11 13:03:11.813273] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.697 13:03:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:53.697 13:03:12 -- common/autotest_common.sh@852 -- # return 0 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:53.697 [2024-06-11 13:03:12.414980] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.697 [2024-06-11 13:03:12.415181] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.697 [2024-06-11 13:03:12.415303] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.697 [2024-06-11 13:03:12.415369] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.697 [2024-06-11 13:03:12.415455] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:53.697 [2024-06-11 13:03:12.415608] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:53.697 [2024-06-11 13:03:12.415702] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:53.697 [2024-06-11 13:03:12.415757] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:53.697 13:03:12 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.697 13:03:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.955 13:03:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.955 "name": "Existed_Raid", 00:17:53.955 "uuid": "9ef32bf6-b4b2-4795-9e0e-7ada0caddec7", 00:17:53.955 "strip_size_kb": 64, 00:17:53.955 "state": "configuring", 00:17:53.955 "raid_level": "raid0", 00:17:53.955 "superblock": true, 00:17:53.955 "num_base_bdevs": 4, 00:17:53.955 "num_base_bdevs_discovered": 0, 00:17:53.955 "num_base_bdevs_operational": 4, 00:17:53.955 "base_bdevs_list": [ 00:17:53.955 { 00:17:53.955 "name": "BaseBdev1", 00:17:53.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.955 "is_configured": false, 00:17:53.955 "data_offset": 0, 00:17:53.955 "data_size": 0 00:17:53.955 }, 00:17:53.955 { 00:17:53.955 "name": "BaseBdev2", 00:17:53.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.955 "is_configured": false, 00:17:53.955 "data_offset": 0, 00:17:53.955 "data_size": 0 00:17:53.955 }, 00:17:53.955 { 00:17:53.955 "name": "BaseBdev3", 00:17:53.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.955 "is_configured": false, 00:17:53.956 "data_offset": 0, 00:17:53.956 "data_size": 0 00:17:53.956 }, 00:17:53.956 { 00:17:53.956 "name": "BaseBdev4", 00:17:53.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.956 "is_configured": false, 00:17:53.956 "data_offset": 0, 00:17:53.956 "data_size": 0 00:17:53.956 } 00:17:53.956 ] 00:17:53.956 }' 00:17:53.956 13:03:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.956 13:03:12 -- common/autotest_common.sh@10 -- # set +x 00:17:54.522 13:03:13 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:54.781 [2024-06-11 13:03:13.443075] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.781 [2024-06-11 13:03:13.443229] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:54.781 13:03:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:55.040 [2024-06-11 13:03:13.623167] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.040 [2024-06-11 13:03:13.623362] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.040 [2024-06-11 13:03:13.623495] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.040 [2024-06-11 13:03:13.623672] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.040 [2024-06-11 13:03:13.623787] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:55.040 [2024-06-11 13:03:13.623915] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:55.040 [2024-06-11 13:03:13.624046] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:55.040 [2024-06-11 13:03:13.624108] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:55.040 13:03:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:55.040 [2024-06-11 13:03:13.840417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.040 BaseBdev1 00:17:55.040 13:03:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:55.040 13:03:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:55.040 13:03:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:55.040 13:03:13 -- common/autotest_common.sh@889 -- # local i 00:17:55.040 13:03:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:55.040 13:03:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:55.040 13:03:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.299 13:03:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.558 [ 00:17:55.558 { 00:17:55.558 "name": "BaseBdev1", 00:17:55.558 "aliases": [ 00:17:55.558 "1ca026ba-d74c-45e3-9055-436cce0cedfc" 00:17:55.558 ], 00:17:55.558 "product_name": "Malloc disk", 00:17:55.558 "block_size": 512, 00:17:55.558 "num_blocks": 65536, 00:17:55.558 "uuid": "1ca026ba-d74c-45e3-9055-436cce0cedfc", 00:17:55.558 "assigned_rate_limits": { 00:17:55.558 "rw_ios_per_sec": 0, 00:17:55.558 "rw_mbytes_per_sec": 0, 00:17:55.558 "r_mbytes_per_sec": 0, 00:17:55.558 "w_mbytes_per_sec": 0 00:17:55.558 }, 00:17:55.558 "claimed": true, 00:17:55.558 "claim_type": "exclusive_write", 00:17:55.558 "zoned": false, 00:17:55.558 "supported_io_types": { 00:17:55.558 "read": true, 00:17:55.558 "write": true, 00:17:55.558 "unmap": true, 00:17:55.558 "write_zeroes": true, 00:17:55.558 "flush": true, 00:17:55.558 "reset": true, 00:17:55.558 "compare": false, 00:17:55.558 "compare_and_write": false, 00:17:55.558 "abort": true, 00:17:55.558 "nvme_admin": false, 00:17:55.558 "nvme_io": false 00:17:55.558 }, 00:17:55.558 "memory_domains": [ 00:17:55.558 { 00:17:55.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.558 "dma_device_type": 2 00:17:55.558 } 00:17:55.558 ], 00:17:55.558 "driver_specific": {} 00:17:55.558 } 00:17:55.558 ] 00:17:55.558 13:03:14 -- common/autotest_common.sh@895 -- # return 0 00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.558 13:03:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.817 13:03:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.817 "name": "Existed_Raid", 00:17:55.817 "uuid": "91464f9a-f4c2-43ed-9a8d-da737b7dfc94", 00:17:55.817 "strip_size_kb": 64, 00:17:55.817 "state": "configuring", 00:17:55.817 "raid_level": "raid0", 00:17:55.817 "superblock": true, 00:17:55.817 "num_base_bdevs": 4, 00:17:55.817 "num_base_bdevs_discovered": 1, 00:17:55.817 "num_base_bdevs_operational": 4, 00:17:55.817 "base_bdevs_list": [ 00:17:55.817 { 00:17:55.817 "name": "BaseBdev1", 00:17:55.817 "uuid": "1ca026ba-d74c-45e3-9055-436cce0cedfc", 00:17:55.817 "is_configured": true, 00:17:55.817 "data_offset": 2048, 00:17:55.817 "data_size": 63488 00:17:55.817 }, 00:17:55.817 { 00:17:55.817 "name": "BaseBdev2", 00:17:55.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.817 "is_configured": false, 00:17:55.817 "data_offset": 0, 00:17:55.817 "data_size": 0 00:17:55.817 }, 00:17:55.817 { 00:17:55.817 "name": "BaseBdev3", 00:17:55.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.817 "is_configured": false, 00:17:55.817 "data_offset": 0, 00:17:55.817 "data_size": 0 00:17:55.817 }, 00:17:55.817 { 00:17:55.817 "name": "BaseBdev4", 00:17:55.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.817 "is_configured": false, 00:17:55.817 "data_offset": 0, 00:17:55.817 "data_size": 0 00:17:55.817 } 00:17:55.817 ] 00:17:55.817 }' 00:17:55.817 13:03:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.817 13:03:14 -- common/autotest_common.sh@10 -- # set +x 00:17:56.384 13:03:15 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:56.643 [2024-06-11 13:03:15.316957] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.643 [2024-06-11 13:03:15.317193] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:56.643 13:03:15 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:56.643 13:03:15 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:56.903 13:03:15 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:57.161 BaseBdev1 00:17:57.161 13:03:15 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:57.161 13:03:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:57.161 13:03:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:57.161 13:03:15 -- common/autotest_common.sh@889 -- # local i 00:17:57.161 13:03:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:57.161 13:03:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:57.161 13:03:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:57.420 13:03:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:57.679 [ 00:17:57.679 { 00:17:57.679 "name": "BaseBdev1", 00:17:57.679 "aliases": [ 00:17:57.679 "b97fd592-dc30-4d5d-b259-4f9768e48dc8" 00:17:57.679 ], 00:17:57.679 
"product_name": "Malloc disk", 00:17:57.679 "block_size": 512, 00:17:57.679 "num_blocks": 65536, 00:17:57.679 "uuid": "b97fd592-dc30-4d5d-b259-4f9768e48dc8", 00:17:57.679 "assigned_rate_limits": { 00:17:57.679 "rw_ios_per_sec": 0, 00:17:57.679 "rw_mbytes_per_sec": 0, 00:17:57.679 "r_mbytes_per_sec": 0, 00:17:57.679 "w_mbytes_per_sec": 0 00:17:57.679 }, 00:17:57.679 "claimed": false, 00:17:57.679 "zoned": false, 00:17:57.679 "supported_io_types": { 00:17:57.679 "read": true, 00:17:57.679 "write": true, 00:17:57.679 "unmap": true, 00:17:57.679 "write_zeroes": true, 00:17:57.679 "flush": true, 00:17:57.679 "reset": true, 00:17:57.679 "compare": false, 00:17:57.679 "compare_and_write": false, 00:17:57.679 "abort": true, 00:17:57.679 "nvme_admin": false, 00:17:57.679 "nvme_io": false 00:17:57.679 }, 00:17:57.679 "memory_domains": [ 00:17:57.679 { 00:17:57.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.679 "dma_device_type": 2 00:17:57.679 } 00:17:57.679 ], 00:17:57.679 "driver_specific": {} 00:17:57.679 } 00:17:57.679 ] 00:17:57.679 13:03:16 -- common/autotest_common.sh@895 -- # return 0 00:17:57.679 13:03:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:57.679 [2024-06-11 13:03:16.511982] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.679 [2024-06-11 13:03:16.515109] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.679 [2024-06-11 13:03:16.515303] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.679 [2024-06-11 13:03:16.515403] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:57.679 [2024-06-11 13:03:16.515460] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:57.679 [2024-06-11 13:03:16.515546] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:57.679 [2024-06-11 13:03:16.515676] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.939 "name": "Existed_Raid", 00:17:57.939 
"uuid": "152b68a2-1974-44e3-bc3c-67db62c9fe1f", 00:17:57.939 "strip_size_kb": 64, 00:17:57.939 "state": "configuring", 00:17:57.939 "raid_level": "raid0", 00:17:57.939 "superblock": true, 00:17:57.939 "num_base_bdevs": 4, 00:17:57.939 "num_base_bdevs_discovered": 1, 00:17:57.939 "num_base_bdevs_operational": 4, 00:17:57.939 "base_bdevs_list": [ 00:17:57.939 { 00:17:57.939 "name": "BaseBdev1", 00:17:57.939 "uuid": "b97fd592-dc30-4d5d-b259-4f9768e48dc8", 00:17:57.939 "is_configured": true, 00:17:57.939 "data_offset": 2048, 00:17:57.939 "data_size": 63488 00:17:57.939 }, 00:17:57.939 { 00:17:57.939 "name": "BaseBdev2", 00:17:57.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.939 "is_configured": false, 00:17:57.939 "data_offset": 0, 00:17:57.939 "data_size": 0 00:17:57.939 }, 00:17:57.939 { 00:17:57.939 "name": "BaseBdev3", 00:17:57.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.939 "is_configured": false, 00:17:57.939 "data_offset": 0, 00:17:57.939 "data_size": 0 00:17:57.939 }, 00:17:57.939 { 00:17:57.939 "name": "BaseBdev4", 00:17:57.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.939 "is_configured": false, 00:17:57.939 "data_offset": 0, 00:17:57.939 "data_size": 0 00:17:57.939 } 00:17:57.939 ] 00:17:57.939 }' 00:17:57.939 13:03:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.939 13:03:16 -- common/autotest_common.sh@10 -- # set +x 00:17:58.877 13:03:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:58.877 [2024-06-11 13:03:17.711904] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.877 BaseBdev2 00:17:59.136 13:03:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:59.136 13:03:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:59.136 13:03:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:59.136 13:03:17 -- common/autotest_common.sh@889 -- # local i 00:17:59.136 13:03:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:59.136 13:03:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:59.136 13:03:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.395 13:03:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:59.395 [ 00:17:59.395 { 00:17:59.395 "name": "BaseBdev2", 00:17:59.395 "aliases": [ 00:17:59.395 "c489234f-2bac-46b8-a584-c11834d4200a" 00:17:59.395 ], 00:17:59.395 "product_name": "Malloc disk", 00:17:59.395 "block_size": 512, 00:17:59.395 "num_blocks": 65536, 00:17:59.395 "uuid": "c489234f-2bac-46b8-a584-c11834d4200a", 00:17:59.395 "assigned_rate_limits": { 00:17:59.395 "rw_ios_per_sec": 0, 00:17:59.395 "rw_mbytes_per_sec": 0, 00:17:59.395 "r_mbytes_per_sec": 0, 00:17:59.395 "w_mbytes_per_sec": 0 00:17:59.395 }, 00:17:59.395 "claimed": true, 00:17:59.395 "claim_type": "exclusive_write", 00:17:59.395 "zoned": false, 00:17:59.395 "supported_io_types": { 00:17:59.395 "read": true, 00:17:59.395 "write": true, 00:17:59.395 "unmap": true, 00:17:59.395 "write_zeroes": true, 00:17:59.395 "flush": true, 00:17:59.395 "reset": true, 00:17:59.395 "compare": false, 00:17:59.395 "compare_and_write": false, 00:17:59.395 "abort": true, 00:17:59.395 "nvme_admin": false, 00:17:59.395 "nvme_io": false 00:17:59.396 }, 00:17:59.396 "memory_domains": [ 
00:17:59.396 { 00:17:59.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.396 "dma_device_type": 2 00:17:59.396 } 00:17:59.396 ], 00:17:59.396 "driver_specific": {} 00:17:59.396 } 00:17:59.396 ] 00:17:59.396 13:03:18 -- common/autotest_common.sh@895 -- # return 0 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.396 13:03:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.654 13:03:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.654 "name": "Existed_Raid", 00:17:59.654 "uuid": "152b68a2-1974-44e3-bc3c-67db62c9fe1f", 00:17:59.654 "strip_size_kb": 64, 00:17:59.654 "state": "configuring", 00:17:59.654 "raid_level": "raid0", 00:17:59.654 "superblock": true, 00:17:59.654 "num_base_bdevs": 4, 00:17:59.654 "num_base_bdevs_discovered": 2, 00:17:59.654 "num_base_bdevs_operational": 4, 00:17:59.654 "base_bdevs_list": [ 00:17:59.654 { 00:17:59.654 "name": "BaseBdev1", 00:17:59.654 "uuid": "b97fd592-dc30-4d5d-b259-4f9768e48dc8", 00:17:59.654 "is_configured": true, 00:17:59.654 "data_offset": 2048, 00:17:59.654 "data_size": 63488 00:17:59.654 }, 00:17:59.654 { 00:17:59.654 "name": "BaseBdev2", 00:17:59.654 "uuid": "c489234f-2bac-46b8-a584-c11834d4200a", 00:17:59.654 "is_configured": true, 00:17:59.654 "data_offset": 2048, 00:17:59.654 "data_size": 63488 00:17:59.654 }, 00:17:59.654 { 00:17:59.654 "name": "BaseBdev3", 00:17:59.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.654 "is_configured": false, 00:17:59.654 "data_offset": 0, 00:17:59.654 "data_size": 0 00:17:59.654 }, 00:17:59.654 { 00:17:59.654 "name": "BaseBdev4", 00:17:59.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.654 "is_configured": false, 00:17:59.654 "data_offset": 0, 00:17:59.654 "data_size": 0 00:17:59.654 } 00:17:59.654 ] 00:17:59.654 }' 00:17:59.654 13:03:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.654 13:03:18 -- common/autotest_common.sh@10 -- # set +x 00:18:00.588 13:03:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:00.588 [2024-06-11 13:03:19.371877] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:00.588 BaseBdev3 00:18:00.588 13:03:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:00.588 13:03:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:00.588 13:03:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:00.588 
13:03:19 -- common/autotest_common.sh@889 -- # local i 00:18:00.588 13:03:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:00.588 13:03:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:00.588 13:03:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:00.847 13:03:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:01.105 [ 00:18:01.105 { 00:18:01.105 "name": "BaseBdev3", 00:18:01.105 "aliases": [ 00:18:01.105 "73ef34d2-89b6-4ba9-9983-45bfdfbe052c" 00:18:01.105 ], 00:18:01.105 "product_name": "Malloc disk", 00:18:01.105 "block_size": 512, 00:18:01.105 "num_blocks": 65536, 00:18:01.105 "uuid": "73ef34d2-89b6-4ba9-9983-45bfdfbe052c", 00:18:01.105 "assigned_rate_limits": { 00:18:01.105 "rw_ios_per_sec": 0, 00:18:01.105 "rw_mbytes_per_sec": 0, 00:18:01.105 "r_mbytes_per_sec": 0, 00:18:01.105 "w_mbytes_per_sec": 0 00:18:01.105 }, 00:18:01.105 "claimed": true, 00:18:01.105 "claim_type": "exclusive_write", 00:18:01.105 "zoned": false, 00:18:01.105 "supported_io_types": { 00:18:01.105 "read": true, 00:18:01.105 "write": true, 00:18:01.105 "unmap": true, 00:18:01.105 "write_zeroes": true, 00:18:01.105 "flush": true, 00:18:01.105 "reset": true, 00:18:01.105 "compare": false, 00:18:01.105 "compare_and_write": false, 00:18:01.105 "abort": true, 00:18:01.105 "nvme_admin": false, 00:18:01.105 "nvme_io": false 00:18:01.105 }, 00:18:01.105 "memory_domains": [ 00:18:01.105 { 00:18:01.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.105 "dma_device_type": 2 00:18:01.105 } 00:18:01.105 ], 00:18:01.105 "driver_specific": {} 00:18:01.105 } 00:18:01.105 ] 00:18:01.105 13:03:19 -- common/autotest_common.sh@895 -- # return 0 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.105 13:03:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.363 13:03:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.363 "name": "Existed_Raid", 00:18:01.363 "uuid": "152b68a2-1974-44e3-bc3c-67db62c9fe1f", 00:18:01.363 "strip_size_kb": 64, 00:18:01.363 "state": "configuring", 00:18:01.363 "raid_level": "raid0", 00:18:01.363 "superblock": true, 00:18:01.363 "num_base_bdevs": 4, 00:18:01.363 "num_base_bdevs_discovered": 3, 00:18:01.363 "num_base_bdevs_operational": 4, 00:18:01.363 "base_bdevs_list": [ 00:18:01.363 { 00:18:01.363 "name": 
"BaseBdev1", 00:18:01.363 "uuid": "b97fd592-dc30-4d5d-b259-4f9768e48dc8", 00:18:01.363 "is_configured": true, 00:18:01.363 "data_offset": 2048, 00:18:01.363 "data_size": 63488 00:18:01.363 }, 00:18:01.363 { 00:18:01.363 "name": "BaseBdev2", 00:18:01.363 "uuid": "c489234f-2bac-46b8-a584-c11834d4200a", 00:18:01.363 "is_configured": true, 00:18:01.363 "data_offset": 2048, 00:18:01.363 "data_size": 63488 00:18:01.363 }, 00:18:01.363 { 00:18:01.363 "name": "BaseBdev3", 00:18:01.363 "uuid": "73ef34d2-89b6-4ba9-9983-45bfdfbe052c", 00:18:01.363 "is_configured": true, 00:18:01.363 "data_offset": 2048, 00:18:01.363 "data_size": 63488 00:18:01.363 }, 00:18:01.363 { 00:18:01.363 "name": "BaseBdev4", 00:18:01.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.363 "is_configured": false, 00:18:01.363 "data_offset": 0, 00:18:01.363 "data_size": 0 00:18:01.363 } 00:18:01.363 ] 00:18:01.363 }' 00:18:01.363 13:03:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.363 13:03:20 -- common/autotest_common.sh@10 -- # set +x 00:18:01.928 13:03:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:02.186 [2024-06-11 13:03:20.974198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:02.186 [2024-06-11 13:03:20.974682] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:02.186 [2024-06-11 13:03:20.974847] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:02.186 BaseBdev4 00:18:02.186 [2024-06-11 13:03:20.975015] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:02.186 [2024-06-11 13:03:20.975364] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:02.186 [2024-06-11 13:03:20.975510] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:02.186 [2024-06-11 13:03:20.975743] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.186 13:03:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:02.186 13:03:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:02.186 13:03:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:02.186 13:03:20 -- common/autotest_common.sh@889 -- # local i 00:18:02.186 13:03:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:02.186 13:03:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:02.186 13:03:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:02.444 13:03:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:02.702 [ 00:18:02.702 { 00:18:02.702 "name": "BaseBdev4", 00:18:02.702 "aliases": [ 00:18:02.702 "b0b255d3-c619-4853-9349-872c6c351012" 00:18:02.702 ], 00:18:02.702 "product_name": "Malloc disk", 00:18:02.702 "block_size": 512, 00:18:02.702 "num_blocks": 65536, 00:18:02.702 "uuid": "b0b255d3-c619-4853-9349-872c6c351012", 00:18:02.702 "assigned_rate_limits": { 00:18:02.702 "rw_ios_per_sec": 0, 00:18:02.702 "rw_mbytes_per_sec": 0, 00:18:02.702 "r_mbytes_per_sec": 0, 00:18:02.702 "w_mbytes_per_sec": 0 00:18:02.702 }, 00:18:02.702 "claimed": true, 00:18:02.702 "claim_type": "exclusive_write", 00:18:02.702 "zoned": false, 00:18:02.702 
"supported_io_types": { 00:18:02.702 "read": true, 00:18:02.702 "write": true, 00:18:02.702 "unmap": true, 00:18:02.702 "write_zeroes": true, 00:18:02.702 "flush": true, 00:18:02.702 "reset": true, 00:18:02.702 "compare": false, 00:18:02.702 "compare_and_write": false, 00:18:02.702 "abort": true, 00:18:02.702 "nvme_admin": false, 00:18:02.702 "nvme_io": false 00:18:02.702 }, 00:18:02.702 "memory_domains": [ 00:18:02.702 { 00:18:02.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.702 "dma_device_type": 2 00:18:02.702 } 00:18:02.702 ], 00:18:02.702 "driver_specific": {} 00:18:02.702 } 00:18:02.702 ] 00:18:02.702 13:03:21 -- common/autotest_common.sh@895 -- # return 0 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.702 13:03:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.960 13:03:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:02.960 "name": "Existed_Raid", 00:18:02.960 "uuid": "152b68a2-1974-44e3-bc3c-67db62c9fe1f", 00:18:02.960 "strip_size_kb": 64, 00:18:02.960 "state": "online", 00:18:02.961 "raid_level": "raid0", 00:18:02.961 "superblock": true, 00:18:02.961 "num_base_bdevs": 4, 00:18:02.961 "num_base_bdevs_discovered": 4, 00:18:02.961 "num_base_bdevs_operational": 4, 00:18:02.961 "base_bdevs_list": [ 00:18:02.961 { 00:18:02.961 "name": "BaseBdev1", 00:18:02.961 "uuid": "b97fd592-dc30-4d5d-b259-4f9768e48dc8", 00:18:02.961 "is_configured": true, 00:18:02.961 "data_offset": 2048, 00:18:02.961 "data_size": 63488 00:18:02.961 }, 00:18:02.961 { 00:18:02.961 "name": "BaseBdev2", 00:18:02.961 "uuid": "c489234f-2bac-46b8-a584-c11834d4200a", 00:18:02.961 "is_configured": true, 00:18:02.961 "data_offset": 2048, 00:18:02.961 "data_size": 63488 00:18:02.961 }, 00:18:02.961 { 00:18:02.961 "name": "BaseBdev3", 00:18:02.961 "uuid": "73ef34d2-89b6-4ba9-9983-45bfdfbe052c", 00:18:02.961 "is_configured": true, 00:18:02.961 "data_offset": 2048, 00:18:02.961 "data_size": 63488 00:18:02.961 }, 00:18:02.961 { 00:18:02.961 "name": "BaseBdev4", 00:18:02.961 "uuid": "b0b255d3-c619-4853-9349-872c6c351012", 00:18:02.961 "is_configured": true, 00:18:02.961 "data_offset": 2048, 00:18:02.961 "data_size": 63488 00:18:02.961 } 00:18:02.961 ] 00:18:02.961 }' 00:18:02.961 13:03:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:02.961 13:03:21 -- common/autotest_common.sh@10 -- # set +x 00:18:03.526 13:03:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:03.784 [2024-06-11 13:03:22.506648] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.784 [2024-06-11 13:03:22.506858] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.784 [2024-06-11 13:03:22.507038] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.784 13:03:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.042 13:03:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.042 "name": "Existed_Raid", 00:18:04.042 "uuid": "152b68a2-1974-44e3-bc3c-67db62c9fe1f", 00:18:04.042 "strip_size_kb": 64, 00:18:04.042 "state": "offline", 00:18:04.042 "raid_level": "raid0", 00:18:04.042 "superblock": true, 00:18:04.042 "num_base_bdevs": 4, 00:18:04.042 "num_base_bdevs_discovered": 3, 00:18:04.042 "num_base_bdevs_operational": 3, 00:18:04.042 "base_bdevs_list": [ 00:18:04.042 { 00:18:04.042 "name": null, 00:18:04.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.042 "is_configured": false, 00:18:04.042 "data_offset": 2048, 00:18:04.042 "data_size": 63488 00:18:04.042 }, 00:18:04.042 { 00:18:04.042 "name": "BaseBdev2", 00:18:04.042 "uuid": "c489234f-2bac-46b8-a584-c11834d4200a", 00:18:04.042 "is_configured": true, 00:18:04.042 "data_offset": 2048, 00:18:04.042 "data_size": 63488 00:18:04.042 }, 00:18:04.042 { 00:18:04.042 "name": "BaseBdev3", 00:18:04.042 "uuid": "73ef34d2-89b6-4ba9-9983-45bfdfbe052c", 00:18:04.042 "is_configured": true, 00:18:04.042 "data_offset": 2048, 00:18:04.042 "data_size": 63488 00:18:04.042 }, 00:18:04.042 { 00:18:04.042 "name": "BaseBdev4", 00:18:04.042 "uuid": "b0b255d3-c619-4853-9349-872c6c351012", 00:18:04.042 "is_configured": true, 00:18:04.042 "data_offset": 2048, 00:18:04.042 "data_size": 63488 00:18:04.042 } 00:18:04.042 ] 00:18:04.042 }' 00:18:04.042 13:03:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.042 13:03:22 -- common/autotest_common.sh@10 -- # set +x 00:18:04.978 13:03:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:04.978 13:03:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:04.978 13:03:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:04.978 13:03:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:04.978 13:03:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:04.978 13:03:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.978 13:03:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:05.236 [2024-06-11 13:03:24.026282] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:05.493 13:03:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:05.493 13:03:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:05.493 13:03:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.493 13:03:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:05.751 13:03:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:05.751 13:03:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:05.751 13:03:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:05.751 [2024-06-11 13:03:24.570870] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:06.009 13:03:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:06.009 13:03:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:06.009 13:03:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.009 13:03:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:06.268 13:03:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:06.268 13:03:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:06.268 13:03:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:06.268 [2024-06-11 13:03:25.034267] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:06.268 [2024-06-11 13:03:25.034440] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:06.527 13:03:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:06.527 13:03:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:06.527 13:03:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.527 13:03:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:06.527 13:03:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:06.527 13:03:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:06.527 13:03:25 -- bdev/bdev_raid.sh@287 -- # killprocess 121885 00:18:06.527 13:03:25 -- common/autotest_common.sh@926 -- # '[' -z 121885 ']' 00:18:06.527 13:03:25 -- common/autotest_common.sh@930 -- # kill -0 121885 00:18:06.527 13:03:25 -- common/autotest_common.sh@931 -- # uname 00:18:06.527 13:03:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:06.527 13:03:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121885 00:18:06.527 killing process with pid 121885 00:18:06.527 13:03:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:06.527 13:03:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:06.527 13:03:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121885' 00:18:06.527 13:03:25 -- 
common/autotest_common.sh@945 -- # kill 121885 00:18:06.527 13:03:25 -- common/autotest_common.sh@950 -- # wait 121885 00:18:06.527 [2024-06-11 13:03:25.358987] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:06.527 [2024-06-11 13:03:25.359132] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.900 ************************************ 00:18:07.900 END TEST raid_state_function_test_sb 00:18:07.900 ************************************ 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:07.900 00:18:07.900 real 0m15.148s 00:18:07.900 user 0m27.310s 00:18:07.900 sys 0m1.628s 00:18:07.900 13:03:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.900 13:03:26 -- common/autotest_common.sh@10 -- # set +x 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:07.900 13:03:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:07.900 13:03:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:07.900 13:03:26 -- common/autotest_common.sh@10 -- # set +x 00:18:07.900 ************************************ 00:18:07.900 START TEST raid_superblock_test 00:18:07.900 ************************************ 00:18:07.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:07.900 13:03:26 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@357 -- # raid_pid=122375 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122375 /var/tmp/spdk-raid.sock 00:18:07.900 13:03:26 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:07.900 13:03:26 -- common/autotest_common.sh@819 -- # '[' -z 122375 ']' 00:18:07.900 13:03:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:07.900 13:03:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:07.900 13:03:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:18:07.900 13:03:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:07.900 13:03:26 -- common/autotest_common.sh@10 -- # set +x 00:18:07.900 [2024-06-11 13:03:26.479429] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:07.900 [2024-06-11 13:03:26.479886] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122375 ] 00:18:07.900 [2024-06-11 13:03:26.624679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.158 [2024-06-11 13:03:26.793139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.158 [2024-06-11 13:03:26.956761] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.726 13:03:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:08.726 13:03:27 -- common/autotest_common.sh@852 -- # return 0 00:18:08.726 13:03:27 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:08.726 13:03:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:08.726 13:03:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:08.726 13:03:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:08.726 13:03:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:08.726 13:03:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:08.726 13:03:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:08.726 13:03:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:08.726 13:03:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:08.997 malloc1 00:18:08.997 13:03:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:09.268 [2024-06-11 13:03:27.831920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:09.268 [2024-06-11 13:03:27.832158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.268 [2024-06-11 13:03:27.832298] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:09.268 [2024-06-11 13:03:27.832434] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.268 [2024-06-11 13:03:27.834725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.268 [2024-06-11 13:03:27.834882] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:09.268 pt1 00:18:09.268 13:03:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:09.268 13:03:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:09.268 13:03:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:09.268 13:03:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:09.268 13:03:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:09.268 13:03:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:09.268 13:03:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:09.268 13:03:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:09.268 13:03:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:09.268 malloc2 00:18:09.268 13:03:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:09.526 [2024-06-11 13:03:28.303346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:09.526 [2024-06-11 13:03:28.303607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.526 [2024-06-11 13:03:28.303687] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:09.526 [2024-06-11 13:03:28.303934] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.526 [2024-06-11 13:03:28.306176] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.526 [2024-06-11 13:03:28.306366] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:09.526 pt2 00:18:09.526 13:03:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:09.526 13:03:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:09.526 13:03:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:09.526 13:03:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:09.526 13:03:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:09.526 13:03:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:09.526 13:03:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:09.526 13:03:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:09.527 13:03:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:09.784 malloc3 00:18:09.784 13:03:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:10.042 [2024-06-11 13:03:28.781709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:10.042 [2024-06-11 13:03:28.781963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.042 [2024-06-11 13:03:28.782128] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:10.042 [2024-06-11 13:03:28.782257] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.042 [2024-06-11 13:03:28.784411] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.042 [2024-06-11 13:03:28.784581] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:10.042 pt3 00:18:10.042 13:03:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:10.042 13:03:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:10.042 13:03:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:10.042 13:03:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:10.042 13:03:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:10.042 13:03:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.042 13:03:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.042 13:03:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.042 13:03:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:10.300 malloc4 00:18:10.300 13:03:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:10.559 [2024-06-11 13:03:29.187235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:10.559 [2024-06-11 13:03:29.187448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.559 [2024-06-11 13:03:29.187606] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:10.559 [2024-06-11 13:03:29.187757] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.559 [2024-06-11 13:03:29.189805] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.559 [2024-06-11 13:03:29.190005] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:10.559 pt4 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:10.559 [2024-06-11 13:03:29.375312] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:10.559 [2024-06-11 13:03:29.377000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.559 [2024-06-11 13:03:29.377196] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:10.559 [2024-06-11 13:03:29.377378] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:10.559 [2024-06-11 13:03:29.377773] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:10.559 [2024-06-11 13:03:29.377959] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:10.559 [2024-06-11 13:03:29.378126] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:10.559 [2024-06-11 13:03:29.378475] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:10.559 [2024-06-11 13:03:29.378577] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:10.559 [2024-06-11 13:03:29.378775] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:10.559 13:03:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.818 13:03:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.818 "name": "raid_bdev1", 00:18:10.818 "uuid": "9b4d0f55-61f5-4b08-b5a5-bf2c18e26403", 00:18:10.818 "strip_size_kb": 64, 00:18:10.818 "state": "online", 00:18:10.818 "raid_level": "raid0", 00:18:10.818 "superblock": true, 00:18:10.818 "num_base_bdevs": 4, 00:18:10.818 "num_base_bdevs_discovered": 4, 00:18:10.818 "num_base_bdevs_operational": 4, 00:18:10.818 "base_bdevs_list": [ 00:18:10.818 { 00:18:10.818 "name": "pt1", 00:18:10.818 "uuid": "66dee874-7c8a-503e-ac06-81e40570f38b", 00:18:10.818 "is_configured": true, 00:18:10.818 "data_offset": 2048, 00:18:10.818 "data_size": 63488 00:18:10.818 }, 00:18:10.818 { 00:18:10.818 "name": "pt2", 00:18:10.818 "uuid": "b5193483-c827-56d0-9524-f93d5bab55a4", 00:18:10.818 "is_configured": true, 00:18:10.818 "data_offset": 2048, 00:18:10.818 "data_size": 63488 00:18:10.818 }, 00:18:10.818 { 00:18:10.818 "name": "pt3", 00:18:10.818 "uuid": "2f5e293a-e49c-5f13-b7a1-b5c39fa1ba9f", 00:18:10.818 "is_configured": true, 00:18:10.818 "data_offset": 2048, 00:18:10.818 "data_size": 63488 00:18:10.818 }, 00:18:10.818 { 00:18:10.818 "name": "pt4", 00:18:10.818 "uuid": "272e59c4-0762-5cd1-a2a7-377c7a7bb037", 00:18:10.818 "is_configured": true, 00:18:10.818 "data_offset": 2048, 00:18:10.818 "data_size": 63488 00:18:10.818 } 00:18:10.818 ] 00:18:10.818 }' 00:18:10.818 13:03:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.818 13:03:29 -- common/autotest_common.sh@10 -- # set +x 00:18:11.754 13:03:30 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:11.754 13:03:30 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:11.754 [2024-06-11 13:03:30.487654] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.754 13:03:30 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=9b4d0f55-61f5-4b08-b5a5-bf2c18e26403 00:18:11.754 13:03:30 -- bdev/bdev_raid.sh@380 -- # '[' -z 9b4d0f55-61f5-4b08-b5a5-bf2c18e26403 ']' 00:18:11.754 13:03:30 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:12.013 [2024-06-11 13:03:30.731518] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.013 [2024-06-11 13:03:30.731686] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.013 [2024-06-11 13:03:30.731861] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.013 [2024-06-11 13:03:30.732085] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.013 [2024-06-11 13:03:30.732200] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:12.013 13:03:30 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.013 13:03:30 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:12.272 13:03:31 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:12.272 13:03:31 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:12.272 13:03:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:12.272 13:03:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
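The xtrace above is the setup half of raid_superblock_test. A condensed sketch of the same RPC sequence, with the socket path, sizes, and UUIDs copied from the traced commands (a simplified reconstruction for readability, not a verbatim excerpt of bdev_raid.sh):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # four 32 MiB malloc bdevs with 512-byte blocks, each wrapped in a passthru bdev with a fixed UUID
  for i in 1 2 3 4; do
      $rpc bdev_malloc_create 32 512 -b malloc$i
      $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # assemble raid0 over the passthru bdevs with a 64 KiB strip size and an on-disk superblock (-s)
  $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # verify_raid_bdev_state reads the result back and filters it with jq
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

Once the "online" state is confirmed, the test tears the array back down (bdev_raid_delete raid_bdev1 followed by bdev_passthru_delete for each pt bdev), which is the point the trace has reached here.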
00:18:12.530 13:03:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:12.530 13:03:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:12.788 13:03:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:12.788 13:03:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:12.788 13:03:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:12.788 13:03:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:13.047 13:03:31 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:13.047 13:03:31 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:13.305 13:03:32 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:13.305 13:03:32 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:13.305 13:03:32 -- common/autotest_common.sh@640 -- # local es=0 00:18:13.305 13:03:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:13.305 13:03:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.305 13:03:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:13.305 13:03:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.306 13:03:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:13.306 13:03:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.306 13:03:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:13.306 13:03:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.306 13:03:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:13.306 13:03:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:13.563 [2024-06-11 13:03:32.223773] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:13.563 [2024-06-11 13:03:32.225590] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:13.563 [2024-06-11 13:03:32.225798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:13.563 [2024-06-11 13:03:32.226009] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:13.563 [2024-06-11 13:03:32.226168] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:13.563 [2024-06-11 13:03:32.226366] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:13.563 [2024-06-11 13:03:32.226508] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:13.563 [2024-06-11 
13:03:32.226657] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:13.563 [2024-06-11 13:03:32.226774] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.563 [2024-06-11 13:03:32.226875] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:18:13.563 request: 00:18:13.563 { 00:18:13.563 "name": "raid_bdev1", 00:18:13.563 "raid_level": "raid0", 00:18:13.563 "base_bdevs": [ 00:18:13.563 "malloc1", 00:18:13.563 "malloc2", 00:18:13.563 "malloc3", 00:18:13.563 "malloc4" 00:18:13.563 ], 00:18:13.563 "superblock": false, 00:18:13.563 "strip_size_kb": 64, 00:18:13.563 "method": "bdev_raid_create", 00:18:13.563 "req_id": 1 00:18:13.563 } 00:18:13.563 Got JSON-RPC error response 00:18:13.563 response: 00:18:13.563 { 00:18:13.563 "code": -17, 00:18:13.563 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:13.563 } 00:18:13.563 13:03:32 -- common/autotest_common.sh@643 -- # es=1 00:18:13.563 13:03:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:13.563 13:03:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:13.563 13:03:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:13.563 13:03:32 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.563 13:03:32 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:13.822 [2024-06-11 13:03:32.603785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.822 [2024-06-11 13:03:32.604064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.822 [2024-06-11 13:03:32.604210] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:13.822 [2024-06-11 13:03:32.604321] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.822 [2024-06-11 13:03:32.606563] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.822 [2024-06-11 13:03:32.606742] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:13.822 [2024-06-11 13:03:32.606960] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:13.822 [2024-06-11 13:03:32.607102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:13.822 pt1 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.822 13:03:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.080 13:03:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.080 "name": "raid_bdev1", 00:18:14.080 "uuid": "9b4d0f55-61f5-4b08-b5a5-bf2c18e26403", 00:18:14.080 "strip_size_kb": 64, 00:18:14.080 "state": "configuring", 00:18:14.080 "raid_level": "raid0", 00:18:14.080 "superblock": true, 00:18:14.080 "num_base_bdevs": 4, 00:18:14.080 "num_base_bdevs_discovered": 1, 00:18:14.080 "num_base_bdevs_operational": 4, 00:18:14.080 "base_bdevs_list": [ 00:18:14.080 { 00:18:14.080 "name": "pt1", 00:18:14.080 "uuid": "66dee874-7c8a-503e-ac06-81e40570f38b", 00:18:14.080 "is_configured": true, 00:18:14.080 "data_offset": 2048, 00:18:14.080 "data_size": 63488 00:18:14.080 }, 00:18:14.080 { 00:18:14.080 "name": null, 00:18:14.080 "uuid": "b5193483-c827-56d0-9524-f93d5bab55a4", 00:18:14.080 "is_configured": false, 00:18:14.080 "data_offset": 2048, 00:18:14.080 "data_size": 63488 00:18:14.080 }, 00:18:14.080 { 00:18:14.080 "name": null, 00:18:14.080 "uuid": "2f5e293a-e49c-5f13-b7a1-b5c39fa1ba9f", 00:18:14.080 "is_configured": false, 00:18:14.080 "data_offset": 2048, 00:18:14.080 "data_size": 63488 00:18:14.080 }, 00:18:14.080 { 00:18:14.080 "name": null, 00:18:14.080 "uuid": "272e59c4-0762-5cd1-a2a7-377c7a7bb037", 00:18:14.080 "is_configured": false, 00:18:14.080 "data_offset": 2048, 00:18:14.080 "data_size": 63488 00:18:14.080 } 00:18:14.080 ] 00:18:14.080 }' 00:18:14.080 13:03:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.080 13:03:32 -- common/autotest_common.sh@10 -- # set +x 00:18:15.017 13:03:33 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:15.017 13:03:33 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:15.017 [2024-06-11 13:03:33.776123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:15.017 [2024-06-11 13:03:33.776384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.017 [2024-06-11 13:03:33.776528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:15.017 [2024-06-11 13:03:33.776650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.017 [2024-06-11 13:03:33.777197] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.017 [2024-06-11 13:03:33.777384] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:15.017 [2024-06-11 13:03:33.777708] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:15.017 [2024-06-11 13:03:33.777873] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:15.017 pt2 00:18:15.017 13:03:33 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:15.276 [2024-06-11 13:03:33.960152] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.276 13:03:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.535 13:03:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.535 "name": "raid_bdev1", 00:18:15.535 "uuid": "9b4d0f55-61f5-4b08-b5a5-bf2c18e26403", 00:18:15.535 "strip_size_kb": 64, 00:18:15.535 "state": "configuring", 00:18:15.535 "raid_level": "raid0", 00:18:15.535 "superblock": true, 00:18:15.535 "num_base_bdevs": 4, 00:18:15.535 "num_base_bdevs_discovered": 1, 00:18:15.535 "num_base_bdevs_operational": 4, 00:18:15.535 "base_bdevs_list": [ 00:18:15.535 { 00:18:15.535 "name": "pt1", 00:18:15.535 "uuid": "66dee874-7c8a-503e-ac06-81e40570f38b", 00:18:15.535 "is_configured": true, 00:18:15.535 "data_offset": 2048, 00:18:15.535 "data_size": 63488 00:18:15.535 }, 00:18:15.535 { 00:18:15.535 "name": null, 00:18:15.535 "uuid": "b5193483-c827-56d0-9524-f93d5bab55a4", 00:18:15.535 "is_configured": false, 00:18:15.535 "data_offset": 2048, 00:18:15.535 "data_size": 63488 00:18:15.535 }, 00:18:15.535 { 00:18:15.535 "name": null, 00:18:15.535 "uuid": "2f5e293a-e49c-5f13-b7a1-b5c39fa1ba9f", 00:18:15.535 "is_configured": false, 00:18:15.535 "data_offset": 2048, 00:18:15.535 "data_size": 63488 00:18:15.535 }, 00:18:15.535 { 00:18:15.535 "name": null, 00:18:15.535 "uuid": "272e59c4-0762-5cd1-a2a7-377c7a7bb037", 00:18:15.536 "is_configured": false, 00:18:15.536 "data_offset": 2048, 00:18:15.536 "data_size": 63488 00:18:15.536 } 00:18:15.536 ] 00:18:15.536 }' 00:18:15.536 13:03:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.536 13:03:34 -- common/autotest_common.sh@10 -- # set +x 00:18:16.103 13:03:34 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:16.103 13:03:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:16.103 13:03:34 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:16.362 [2024-06-11 13:03:34.980367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:16.362 [2024-06-11 13:03:34.980611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.362 [2024-06-11 13:03:34.980684] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:16.362 [2024-06-11 13:03:34.980868] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.362 [2024-06-11 13:03:34.981387] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.362 [2024-06-11 13:03:34.981669] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:16.362 [2024-06-11 13:03:34.981884] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:18:16.362 [2024-06-11 13:03:34.982037] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:16.362 pt2 00:18:16.362 13:03:34 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:16.362 13:03:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:16.362 13:03:34 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:16.362 [2024-06-11 13:03:35.164375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:16.362 [2024-06-11 13:03:35.164596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.362 [2024-06-11 13:03:35.164656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:16.362 [2024-06-11 13:03:35.164907] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.362 [2024-06-11 13:03:35.165360] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.362 [2024-06-11 13:03:35.165616] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:16.362 [2024-06-11 13:03:35.165833] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:16.362 [2024-06-11 13:03:35.165981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:16.362 pt3 00:18:16.362 13:03:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:16.362 13:03:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:16.362 13:03:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:16.624 [2024-06-11 13:03:35.368456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:16.624 [2024-06-11 13:03:35.368715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.624 [2024-06-11 13:03:35.368802] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:16.624 [2024-06-11 13:03:35.369013] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.624 [2024-06-11 13:03:35.369665] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.624 [2024-06-11 13:03:35.369922] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:16.624 [2024-06-11 13:03:35.370138] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:16.624 [2024-06-11 13:03:35.370284] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:16.624 [2024-06-11 13:03:35.370538] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:18:16.624 [2024-06-11 13:03:35.370639] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:16.624 [2024-06-11 13:03:35.370777] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:16.624 [2024-06-11 13:03:35.371140] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:18:16.624 [2024-06-11 13:03:35.371243] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:18:16.624 [2024-06-11 13:03:35.371459] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:16.624 pt4 00:18:16.624 13:03:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:16.624 13:03:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:16.624 13:03:35 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:16.624 13:03:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:16.625 13:03:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:16.625 13:03:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:16.625 13:03:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:16.625 13:03:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:16.625 13:03:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.625 13:03:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:16.625 13:03:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.625 13:03:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.625 13:03:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.625 13:03:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.884 13:03:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:16.884 "name": "raid_bdev1", 00:18:16.884 "uuid": "9b4d0f55-61f5-4b08-b5a5-bf2c18e26403", 00:18:16.884 "strip_size_kb": 64, 00:18:16.884 "state": "online", 00:18:16.884 "raid_level": "raid0", 00:18:16.884 "superblock": true, 00:18:16.884 "num_base_bdevs": 4, 00:18:16.884 "num_base_bdevs_discovered": 4, 00:18:16.884 "num_base_bdevs_operational": 4, 00:18:16.884 "base_bdevs_list": [ 00:18:16.884 { 00:18:16.884 "name": "pt1", 00:18:16.884 "uuid": "66dee874-7c8a-503e-ac06-81e40570f38b", 00:18:16.884 "is_configured": true, 00:18:16.884 "data_offset": 2048, 00:18:16.884 "data_size": 63488 00:18:16.884 }, 00:18:16.884 { 00:18:16.884 "name": "pt2", 00:18:16.884 "uuid": "b5193483-c827-56d0-9524-f93d5bab55a4", 00:18:16.884 "is_configured": true, 00:18:16.884 "data_offset": 2048, 00:18:16.884 "data_size": 63488 00:18:16.884 }, 00:18:16.884 { 00:18:16.884 "name": "pt3", 00:18:16.884 "uuid": "2f5e293a-e49c-5f13-b7a1-b5c39fa1ba9f", 00:18:16.884 "is_configured": true, 00:18:16.884 "data_offset": 2048, 00:18:16.884 "data_size": 63488 00:18:16.884 }, 00:18:16.884 { 00:18:16.884 "name": "pt4", 00:18:16.884 "uuid": "272e59c4-0762-5cd1-a2a7-377c7a7bb037", 00:18:16.884 "is_configured": true, 00:18:16.884 "data_offset": 2048, 00:18:16.884 "data_size": 63488 00:18:16.884 } 00:18:16.884 ] 00:18:16.884 }' 00:18:16.884 13:03:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:16.884 13:03:35 -- common/autotest_common.sh@10 -- # set +x 00:18:17.820 13:03:36 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:17.820 13:03:36 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:17.820 [2024-06-11 13:03:36.548887] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.820 13:03:36 -- bdev/bdev_raid.sh@430 -- # '[' 9b4d0f55-61f5-4b08-b5a5-bf2c18e26403 '!=' 9b4d0f55-61f5-4b08-b5a5-bf2c18e26403 ']' 00:18:17.820 13:03:36 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:17.820 13:03:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:17.820 13:03:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:17.820 13:03:36 -- bdev/bdev_raid.sh@511 -- # killprocess 122375 00:18:17.820 13:03:36 -- common/autotest_common.sh@926 -- # '[' -z 
122375 ']' 00:18:17.820 13:03:36 -- common/autotest_common.sh@930 -- # kill -0 122375 00:18:17.820 13:03:36 -- common/autotest_common.sh@931 -- # uname 00:18:17.820 13:03:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:17.820 13:03:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122375 00:18:17.820 13:03:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:17.820 13:03:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:17.820 13:03:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122375' 00:18:17.820 killing process with pid 122375 00:18:17.820 13:03:36 -- common/autotest_common.sh@945 -- # kill 122375 00:18:17.820 13:03:36 -- common/autotest_common.sh@950 -- # wait 122375 00:18:17.820 [2024-06-11 13:03:36.578778] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.820 [2024-06-11 13:03:36.578853] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.820 [2024-06-11 13:03:36.578921] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.820 [2024-06-11 13:03:36.579037] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:18:18.079 [2024-06-11 13:03:36.857758] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.016 ************************************ 00:18:19.016 END TEST raid_superblock_test 00:18:19.016 ************************************ 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:19.016 00:18:19.016 real 0m11.362s 00:18:19.016 user 0m20.079s 00:18:19.016 sys 0m1.231s 00:18:19.016 13:03:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:19.016 13:03:37 -- common/autotest_common.sh@10 -- # set +x 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:19.016 13:03:37 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:19.016 13:03:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:19.016 13:03:37 -- common/autotest_common.sh@10 -- # set +x 00:18:19.016 ************************************ 00:18:19.016 START TEST raid_state_function_test 00:18:19.016 ************************************ 00:18:19.016 13:03:37 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.016 13:03:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.016 13:03:37 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:19.017 13:03:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.017 13:03:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.017 13:03:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:19.017 13:03:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.017 13:03:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.275 Process raid pid: 122722 00:18:19.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=122722 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122722' 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122722 /var/tmp/spdk-raid.sock 00:18:19.275 13:03:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:19.275 13:03:37 -- common/autotest_common.sh@819 -- # '[' -z 122722 ']' 00:18:19.275 13:03:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:19.275 13:03:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:19.275 13:03:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:19.276 13:03:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:19.276 13:03:37 -- common/autotest_common.sh@10 -- # set +x 00:18:19.276 [2024-06-11 13:03:37.918793] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
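raid_state_function_test drives a dedicated bdev_svc application over its own RPC socket. A minimal sketch of that harness startup, using the binary path, socket, and flags from the trace (the readiness poll below is an illustrative stand-in for the waitforlisten helper, not its actual implementation):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # block until the RPC socket answers before issuing bdev_raid_create and the state checks
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

From here on, every step in the trace is an rpc.py call against /var/tmp/spdk-raid.sock followed by a bdev_raid_get_bdevs/jq check of the raid bdev's reported state.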
00:18:19.276 [2024-06-11 13:03:37.919185] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.276 [2024-06-11 13:03:38.091022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.534 [2024-06-11 13:03:38.307139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.793 [2024-06-11 13:03:38.477914] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.053 13:03:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:20.053 13:03:38 -- common/autotest_common.sh@852 -- # return 0 00:18:20.053 13:03:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:20.312 [2024-06-11 13:03:39.042499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.312 [2024-06-11 13:03:39.042755] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.312 [2024-06-11 13:03:39.042860] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.312 [2024-06-11 13:03:39.042924] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.312 [2024-06-11 13:03:39.043016] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:20.312 [2024-06-11 13:03:39.043089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.312 [2024-06-11 13:03:39.043125] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:20.312 [2024-06-11 13:03:39.043256] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.312 13:03:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.571 13:03:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.571 "name": "Existed_Raid", 00:18:20.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.571 "strip_size_kb": 64, 00:18:20.571 "state": "configuring", 00:18:20.571 "raid_level": "concat", 00:18:20.571 "superblock": false, 00:18:20.571 "num_base_bdevs": 4, 00:18:20.571 "num_base_bdevs_discovered": 0, 00:18:20.571 "num_base_bdevs_operational": 4, 00:18:20.571 "base_bdevs_list": [ 00:18:20.571 { 00:18:20.571 
"name": "BaseBdev1", 00:18:20.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.571 "is_configured": false, 00:18:20.571 "data_offset": 0, 00:18:20.571 "data_size": 0 00:18:20.571 }, 00:18:20.571 { 00:18:20.571 "name": "BaseBdev2", 00:18:20.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.571 "is_configured": false, 00:18:20.571 "data_offset": 0, 00:18:20.571 "data_size": 0 00:18:20.571 }, 00:18:20.571 { 00:18:20.571 "name": "BaseBdev3", 00:18:20.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.571 "is_configured": false, 00:18:20.571 "data_offset": 0, 00:18:20.571 "data_size": 0 00:18:20.571 }, 00:18:20.571 { 00:18:20.571 "name": "BaseBdev4", 00:18:20.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.571 "is_configured": false, 00:18:20.571 "data_offset": 0, 00:18:20.571 "data_size": 0 00:18:20.571 } 00:18:20.571 ] 00:18:20.571 }' 00:18:20.571 13:03:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.571 13:03:39 -- common/autotest_common.sh@10 -- # set +x 00:18:21.138 13:03:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:21.397 [2024-06-11 13:03:40.106559] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:21.397 [2024-06-11 13:03:40.106740] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:21.397 13:03:40 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:21.655 [2024-06-11 13:03:40.298615] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:21.655 [2024-06-11 13:03:40.298811] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:21.655 [2024-06-11 13:03:40.298909] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.655 [2024-06-11 13:03:40.299037] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.655 [2024-06-11 13:03:40.299128] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:21.655 [2024-06-11 13:03:40.299282] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:21.655 [2024-06-11 13:03:40.299374] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:21.655 [2024-06-11 13:03:40.299447] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:21.655 13:03:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:21.913 [2024-06-11 13:03:40.518100] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.913 BaseBdev1 00:18:21.913 13:03:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:21.913 13:03:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:21.913 13:03:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:21.913 13:03:40 -- common/autotest_common.sh@889 -- # local i 00:18:21.913 13:03:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:21.913 13:03:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:21.913 13:03:40 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:21.913 13:03:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:22.171 [ 00:18:22.171 { 00:18:22.171 "name": "BaseBdev1", 00:18:22.171 "aliases": [ 00:18:22.171 "98ad7339-4c1d-42bc-9dc4-6f0c46ea76a5" 00:18:22.172 ], 00:18:22.172 "product_name": "Malloc disk", 00:18:22.172 "block_size": 512, 00:18:22.172 "num_blocks": 65536, 00:18:22.172 "uuid": "98ad7339-4c1d-42bc-9dc4-6f0c46ea76a5", 00:18:22.172 "assigned_rate_limits": { 00:18:22.172 "rw_ios_per_sec": 0, 00:18:22.172 "rw_mbytes_per_sec": 0, 00:18:22.172 "r_mbytes_per_sec": 0, 00:18:22.172 "w_mbytes_per_sec": 0 00:18:22.172 }, 00:18:22.172 "claimed": true, 00:18:22.172 "claim_type": "exclusive_write", 00:18:22.172 "zoned": false, 00:18:22.172 "supported_io_types": { 00:18:22.172 "read": true, 00:18:22.172 "write": true, 00:18:22.172 "unmap": true, 00:18:22.172 "write_zeroes": true, 00:18:22.172 "flush": true, 00:18:22.172 "reset": true, 00:18:22.172 "compare": false, 00:18:22.172 "compare_and_write": false, 00:18:22.172 "abort": true, 00:18:22.172 "nvme_admin": false, 00:18:22.172 "nvme_io": false 00:18:22.172 }, 00:18:22.172 "memory_domains": [ 00:18:22.172 { 00:18:22.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.172 "dma_device_type": 2 00:18:22.172 } 00:18:22.172 ], 00:18:22.172 "driver_specific": {} 00:18:22.172 } 00:18:22.172 ] 00:18:22.172 13:03:40 -- common/autotest_common.sh@895 -- # return 0 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.172 13:03:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.430 13:03:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.430 "name": "Existed_Raid", 00:18:22.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.430 "strip_size_kb": 64, 00:18:22.430 "state": "configuring", 00:18:22.430 "raid_level": "concat", 00:18:22.430 "superblock": false, 00:18:22.430 "num_base_bdevs": 4, 00:18:22.430 "num_base_bdevs_discovered": 1, 00:18:22.430 "num_base_bdevs_operational": 4, 00:18:22.430 "base_bdevs_list": [ 00:18:22.430 { 00:18:22.430 "name": "BaseBdev1", 00:18:22.430 "uuid": "98ad7339-4c1d-42bc-9dc4-6f0c46ea76a5", 00:18:22.430 "is_configured": true, 00:18:22.430 "data_offset": 0, 00:18:22.430 "data_size": 65536 00:18:22.430 }, 00:18:22.430 { 00:18:22.430 "name": "BaseBdev2", 00:18:22.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.431 "is_configured": false, 00:18:22.431 "data_offset": 0, 00:18:22.431 "data_size": 0 00:18:22.431 }, 
00:18:22.431 { 00:18:22.431 "name": "BaseBdev3", 00:18:22.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.431 "is_configured": false, 00:18:22.431 "data_offset": 0, 00:18:22.431 "data_size": 0 00:18:22.431 }, 00:18:22.431 { 00:18:22.431 "name": "BaseBdev4", 00:18:22.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.431 "is_configured": false, 00:18:22.431 "data_offset": 0, 00:18:22.431 "data_size": 0 00:18:22.431 } 00:18:22.431 ] 00:18:22.431 }' 00:18:22.431 13:03:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.431 13:03:41 -- common/autotest_common.sh@10 -- # set +x 00:18:22.997 13:03:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:23.256 [2024-06-11 13:03:42.010418] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.257 [2024-06-11 13:03:42.010641] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:23.257 13:03:42 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:23.257 13:03:42 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:23.516 [2024-06-11 13:03:42.254532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.516 [2024-06-11 13:03:42.256404] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.516 [2024-06-11 13:03:42.256621] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.516 [2024-06-11 13:03:42.256749] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:23.516 [2024-06-11 13:03:42.256810] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:23.516 [2024-06-11 13:03:42.256980] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:23.516 [2024-06-11 13:03:42.257034] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.516 13:03:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.775 13:03:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:23.775 "name": "Existed_Raid", 00:18:23.775 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.775 "strip_size_kb": 64, 00:18:23.775 "state": "configuring", 00:18:23.775 "raid_level": "concat", 00:18:23.775 "superblock": false, 00:18:23.775 "num_base_bdevs": 4, 00:18:23.775 "num_base_bdevs_discovered": 1, 00:18:23.775 "num_base_bdevs_operational": 4, 00:18:23.775 "base_bdevs_list": [ 00:18:23.775 { 00:18:23.775 "name": "BaseBdev1", 00:18:23.775 "uuid": "98ad7339-4c1d-42bc-9dc4-6f0c46ea76a5", 00:18:23.775 "is_configured": true, 00:18:23.775 "data_offset": 0, 00:18:23.775 "data_size": 65536 00:18:23.775 }, 00:18:23.775 { 00:18:23.775 "name": "BaseBdev2", 00:18:23.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.775 "is_configured": false, 00:18:23.775 "data_offset": 0, 00:18:23.775 "data_size": 0 00:18:23.775 }, 00:18:23.775 { 00:18:23.775 "name": "BaseBdev3", 00:18:23.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.775 "is_configured": false, 00:18:23.775 "data_offset": 0, 00:18:23.775 "data_size": 0 00:18:23.775 }, 00:18:23.775 { 00:18:23.775 "name": "BaseBdev4", 00:18:23.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.775 "is_configured": false, 00:18:23.775 "data_offset": 0, 00:18:23.775 "data_size": 0 00:18:23.775 } 00:18:23.775 ] 00:18:23.775 }' 00:18:23.775 13:03:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:23.775 13:03:42 -- common/autotest_common.sh@10 -- # set +x 00:18:24.343 13:03:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:24.602 [2024-06-11 13:03:43.341969] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:24.602 BaseBdev2 00:18:24.602 13:03:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:24.602 13:03:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:24.602 13:03:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:24.602 13:03:43 -- common/autotest_common.sh@889 -- # local i 00:18:24.602 13:03:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:24.602 13:03:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:24.602 13:03:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.860 13:03:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:25.119 [ 00:18:25.119 { 00:18:25.119 "name": "BaseBdev2", 00:18:25.119 "aliases": [ 00:18:25.119 "e6127568-aef9-4301-af12-7c157591f6f7" 00:18:25.119 ], 00:18:25.119 "product_name": "Malloc disk", 00:18:25.119 "block_size": 512, 00:18:25.119 "num_blocks": 65536, 00:18:25.119 "uuid": "e6127568-aef9-4301-af12-7c157591f6f7", 00:18:25.119 "assigned_rate_limits": { 00:18:25.119 "rw_ios_per_sec": 0, 00:18:25.119 "rw_mbytes_per_sec": 0, 00:18:25.119 "r_mbytes_per_sec": 0, 00:18:25.119 "w_mbytes_per_sec": 0 00:18:25.119 }, 00:18:25.119 "claimed": true, 00:18:25.119 "claim_type": "exclusive_write", 00:18:25.119 "zoned": false, 00:18:25.119 "supported_io_types": { 00:18:25.119 "read": true, 00:18:25.119 "write": true, 00:18:25.119 "unmap": true, 00:18:25.119 "write_zeroes": true, 00:18:25.119 "flush": true, 00:18:25.119 "reset": true, 00:18:25.119 "compare": false, 00:18:25.119 "compare_and_write": false, 00:18:25.119 "abort": true, 00:18:25.119 "nvme_admin": false, 00:18:25.119 "nvme_io": false 00:18:25.119 }, 00:18:25.119 "memory_domains": [ 
00:18:25.119 { 00:18:25.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.119 "dma_device_type": 2 00:18:25.119 } 00:18:25.119 ], 00:18:25.119 "driver_specific": {} 00:18:25.119 } 00:18:25.119 ] 00:18:25.119 13:03:43 -- common/autotest_common.sh@895 -- # return 0 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.119 13:03:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.377 13:03:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.377 "name": "Existed_Raid", 00:18:25.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.377 "strip_size_kb": 64, 00:18:25.377 "state": "configuring", 00:18:25.377 "raid_level": "concat", 00:18:25.377 "superblock": false, 00:18:25.377 "num_base_bdevs": 4, 00:18:25.377 "num_base_bdevs_discovered": 2, 00:18:25.377 "num_base_bdevs_operational": 4, 00:18:25.377 "base_bdevs_list": [ 00:18:25.377 { 00:18:25.377 "name": "BaseBdev1", 00:18:25.377 "uuid": "98ad7339-4c1d-42bc-9dc4-6f0c46ea76a5", 00:18:25.377 "is_configured": true, 00:18:25.377 "data_offset": 0, 00:18:25.377 "data_size": 65536 00:18:25.377 }, 00:18:25.377 { 00:18:25.377 "name": "BaseBdev2", 00:18:25.377 "uuid": "e6127568-aef9-4301-af12-7c157591f6f7", 00:18:25.377 "is_configured": true, 00:18:25.377 "data_offset": 0, 00:18:25.378 "data_size": 65536 00:18:25.378 }, 00:18:25.378 { 00:18:25.378 "name": "BaseBdev3", 00:18:25.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.378 "is_configured": false, 00:18:25.378 "data_offset": 0, 00:18:25.378 "data_size": 0 00:18:25.378 }, 00:18:25.378 { 00:18:25.378 "name": "BaseBdev4", 00:18:25.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.378 "is_configured": false, 00:18:25.378 "data_offset": 0, 00:18:25.378 "data_size": 0 00:18:25.378 } 00:18:25.378 ] 00:18:25.378 }' 00:18:25.378 13:03:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.378 13:03:44 -- common/autotest_common.sh@10 -- # set +x 00:18:25.945 13:03:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:26.203 [2024-06-11 13:03:44.915552] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:26.203 BaseBdev3 00:18:26.203 13:03:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:26.203 13:03:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:26.203 13:03:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:26.203 
13:03:44 -- common/autotest_common.sh@889 -- # local i 00:18:26.203 13:03:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:26.203 13:03:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:26.203 13:03:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.462 13:03:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:26.720 [ 00:18:26.720 { 00:18:26.720 "name": "BaseBdev3", 00:18:26.720 "aliases": [ 00:18:26.720 "6472ff00-5689-4927-8097-34d7b69efed8" 00:18:26.721 ], 00:18:26.721 "product_name": "Malloc disk", 00:18:26.721 "block_size": 512, 00:18:26.721 "num_blocks": 65536, 00:18:26.721 "uuid": "6472ff00-5689-4927-8097-34d7b69efed8", 00:18:26.721 "assigned_rate_limits": { 00:18:26.721 "rw_ios_per_sec": 0, 00:18:26.721 "rw_mbytes_per_sec": 0, 00:18:26.721 "r_mbytes_per_sec": 0, 00:18:26.721 "w_mbytes_per_sec": 0 00:18:26.721 }, 00:18:26.721 "claimed": true, 00:18:26.721 "claim_type": "exclusive_write", 00:18:26.721 "zoned": false, 00:18:26.721 "supported_io_types": { 00:18:26.721 "read": true, 00:18:26.721 "write": true, 00:18:26.721 "unmap": true, 00:18:26.721 "write_zeroes": true, 00:18:26.721 "flush": true, 00:18:26.721 "reset": true, 00:18:26.721 "compare": false, 00:18:26.721 "compare_and_write": false, 00:18:26.721 "abort": true, 00:18:26.721 "nvme_admin": false, 00:18:26.721 "nvme_io": false 00:18:26.721 }, 00:18:26.721 "memory_domains": [ 00:18:26.721 { 00:18:26.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.721 "dma_device_type": 2 00:18:26.721 } 00:18:26.721 ], 00:18:26.721 "driver_specific": {} 00:18:26.721 } 00:18:26.721 ] 00:18:26.721 13:03:45 -- common/autotest_common.sh@895 -- # return 0 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.721 "name": "Existed_Raid", 00:18:26.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.721 "strip_size_kb": 64, 00:18:26.721 "state": "configuring", 00:18:26.721 "raid_level": "concat", 00:18:26.721 "superblock": false, 00:18:26.721 "num_base_bdevs": 4, 00:18:26.721 "num_base_bdevs_discovered": 3, 00:18:26.721 "num_base_bdevs_operational": 4, 00:18:26.721 "base_bdevs_list": [ 00:18:26.721 { 00:18:26.721 "name": 
"BaseBdev1", 00:18:26.721 "uuid": "98ad7339-4c1d-42bc-9dc4-6f0c46ea76a5", 00:18:26.721 "is_configured": true, 00:18:26.721 "data_offset": 0, 00:18:26.721 "data_size": 65536 00:18:26.721 }, 00:18:26.721 { 00:18:26.721 "name": "BaseBdev2", 00:18:26.721 "uuid": "e6127568-aef9-4301-af12-7c157591f6f7", 00:18:26.721 "is_configured": true, 00:18:26.721 "data_offset": 0, 00:18:26.721 "data_size": 65536 00:18:26.721 }, 00:18:26.721 { 00:18:26.721 "name": "BaseBdev3", 00:18:26.721 "uuid": "6472ff00-5689-4927-8097-34d7b69efed8", 00:18:26.721 "is_configured": true, 00:18:26.721 "data_offset": 0, 00:18:26.721 "data_size": 65536 00:18:26.721 }, 00:18:26.721 { 00:18:26.721 "name": "BaseBdev4", 00:18:26.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.721 "is_configured": false, 00:18:26.721 "data_offset": 0, 00:18:26.721 "data_size": 0 00:18:26.721 } 00:18:26.721 ] 00:18:26.721 }' 00:18:26.721 13:03:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.721 13:03:45 -- common/autotest_common.sh@10 -- # set +x 00:18:27.655 13:03:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:27.655 [2024-06-11 13:03:46.438700] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:27.655 [2024-06-11 13:03:46.439135] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:27.655 [2024-06-11 13:03:46.439339] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:27.655 [2024-06-11 13:03:46.439772] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:27.655 [2024-06-11 13:03:46.440393] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:27.655 [2024-06-11 13:03:46.440590] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:27.655 [2024-06-11 13:03:46.441081] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.655 BaseBdev4 00:18:27.655 13:03:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:27.655 13:03:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:27.655 13:03:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:27.655 13:03:46 -- common/autotest_common.sh@889 -- # local i 00:18:27.655 13:03:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:27.655 13:03:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:27.655 13:03:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:27.914 13:03:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:28.172 [ 00:18:28.172 { 00:18:28.172 "name": "BaseBdev4", 00:18:28.172 "aliases": [ 00:18:28.172 "e20a310f-56a7-4e5e-9519-f2e02aca4deb" 00:18:28.172 ], 00:18:28.172 "product_name": "Malloc disk", 00:18:28.172 "block_size": 512, 00:18:28.172 "num_blocks": 65536, 00:18:28.172 "uuid": "e20a310f-56a7-4e5e-9519-f2e02aca4deb", 00:18:28.172 "assigned_rate_limits": { 00:18:28.172 "rw_ios_per_sec": 0, 00:18:28.172 "rw_mbytes_per_sec": 0, 00:18:28.172 "r_mbytes_per_sec": 0, 00:18:28.172 "w_mbytes_per_sec": 0 00:18:28.172 }, 00:18:28.172 "claimed": true, 00:18:28.172 "claim_type": "exclusive_write", 00:18:28.172 "zoned": false, 00:18:28.172 
"supported_io_types": { 00:18:28.172 "read": true, 00:18:28.172 "write": true, 00:18:28.172 "unmap": true, 00:18:28.172 "write_zeroes": true, 00:18:28.172 "flush": true, 00:18:28.172 "reset": true, 00:18:28.172 "compare": false, 00:18:28.172 "compare_and_write": false, 00:18:28.172 "abort": true, 00:18:28.172 "nvme_admin": false, 00:18:28.172 "nvme_io": false 00:18:28.172 }, 00:18:28.172 "memory_domains": [ 00:18:28.172 { 00:18:28.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.172 "dma_device_type": 2 00:18:28.172 } 00:18:28.172 ], 00:18:28.172 "driver_specific": {} 00:18:28.172 } 00:18:28.172 ] 00:18:28.172 13:03:46 -- common/autotest_common.sh@895 -- # return 0 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.172 13:03:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.431 13:03:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.431 "name": "Existed_Raid", 00:18:28.431 "uuid": "4a3c73af-7053-4c57-8f87-bec8851de297", 00:18:28.431 "strip_size_kb": 64, 00:18:28.431 "state": "online", 00:18:28.431 "raid_level": "concat", 00:18:28.431 "superblock": false, 00:18:28.431 "num_base_bdevs": 4, 00:18:28.431 "num_base_bdevs_discovered": 4, 00:18:28.431 "num_base_bdevs_operational": 4, 00:18:28.431 "base_bdevs_list": [ 00:18:28.431 { 00:18:28.431 "name": "BaseBdev1", 00:18:28.431 "uuid": "98ad7339-4c1d-42bc-9dc4-6f0c46ea76a5", 00:18:28.431 "is_configured": true, 00:18:28.431 "data_offset": 0, 00:18:28.431 "data_size": 65536 00:18:28.431 }, 00:18:28.431 { 00:18:28.431 "name": "BaseBdev2", 00:18:28.431 "uuid": "e6127568-aef9-4301-af12-7c157591f6f7", 00:18:28.431 "is_configured": true, 00:18:28.431 "data_offset": 0, 00:18:28.431 "data_size": 65536 00:18:28.431 }, 00:18:28.431 { 00:18:28.431 "name": "BaseBdev3", 00:18:28.431 "uuid": "6472ff00-5689-4927-8097-34d7b69efed8", 00:18:28.431 "is_configured": true, 00:18:28.431 "data_offset": 0, 00:18:28.431 "data_size": 65536 00:18:28.431 }, 00:18:28.431 { 00:18:28.431 "name": "BaseBdev4", 00:18:28.431 "uuid": "e20a310f-56a7-4e5e-9519-f2e02aca4deb", 00:18:28.431 "is_configured": true, 00:18:28.431 "data_offset": 0, 00:18:28.431 "data_size": 65536 00:18:28.431 } 00:18:28.431 ] 00:18:28.431 }' 00:18:28.431 13:03:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.431 13:03:47 -- common/autotest_common.sh@10 -- # set +x 00:18:28.997 13:03:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:29.256 [2024-06-11 13:03:47.899152] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:29.256 [2024-06-11 13:03:47.899448] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.256 [2024-06-11 13:03:47.899691] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.256 13:03:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.515 13:03:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.515 "name": "Existed_Raid", 00:18:29.515 "uuid": "4a3c73af-7053-4c57-8f87-bec8851de297", 00:18:29.515 "strip_size_kb": 64, 00:18:29.515 "state": "offline", 00:18:29.515 "raid_level": "concat", 00:18:29.515 "superblock": false, 00:18:29.515 "num_base_bdevs": 4, 00:18:29.515 "num_base_bdevs_discovered": 3, 00:18:29.515 "num_base_bdevs_operational": 3, 00:18:29.515 "base_bdevs_list": [ 00:18:29.515 { 00:18:29.515 "name": null, 00:18:29.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.515 "is_configured": false, 00:18:29.515 "data_offset": 0, 00:18:29.515 "data_size": 65536 00:18:29.515 }, 00:18:29.515 { 00:18:29.515 "name": "BaseBdev2", 00:18:29.515 "uuid": "e6127568-aef9-4301-af12-7c157591f6f7", 00:18:29.515 "is_configured": true, 00:18:29.515 "data_offset": 0, 00:18:29.515 "data_size": 65536 00:18:29.515 }, 00:18:29.515 { 00:18:29.515 "name": "BaseBdev3", 00:18:29.515 "uuid": "6472ff00-5689-4927-8097-34d7b69efed8", 00:18:29.515 "is_configured": true, 00:18:29.515 "data_offset": 0, 00:18:29.515 "data_size": 65536 00:18:29.515 }, 00:18:29.515 { 00:18:29.515 "name": "BaseBdev4", 00:18:29.515 "uuid": "e20a310f-56a7-4e5e-9519-f2e02aca4deb", 00:18:29.515 "is_configured": true, 00:18:29.515 "data_offset": 0, 00:18:29.515 "data_size": 65536 00:18:29.515 } 00:18:29.515 ] 00:18:29.515 }' 00:18:29.515 13:03:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.515 13:03:48 -- common/autotest_common.sh@10 -- # set +x 00:18:30.082 13:03:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:30.082 13:03:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:30.082 13:03:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:18:30.082 13:03:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:30.340 13:03:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:30.340 13:03:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.340 13:03:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:30.599 [2024-06-11 13:03:49.299262] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:30.599 13:03:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:30.599 13:03:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:30.599 13:03:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.599 13:03:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:30.876 13:03:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:30.876 13:03:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.876 13:03:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:31.144 [2024-06-11 13:03:49.760501] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:31.144 13:03:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:31.144 13:03:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:31.144 13:03:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.144 13:03:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:31.402 13:03:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:31.402 13:03:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.402 13:03:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:31.660 [2024-06-11 13:03:50.258149] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:31.660 [2024-06-11 13:03:50.258352] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:31.660 13:03:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:31.660 13:03:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:31.660 13:03:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.660 13:03:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:31.919 13:03:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:31.919 13:03:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:31.919 13:03:50 -- bdev/bdev_raid.sh@287 -- # killprocess 122722 00:18:31.919 13:03:50 -- common/autotest_common.sh@926 -- # '[' -z 122722 ']' 00:18:31.919 13:03:50 -- common/autotest_common.sh@930 -- # kill -0 122722 00:18:31.919 13:03:50 -- common/autotest_common.sh@931 -- # uname 00:18:31.919 13:03:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:31.919 13:03:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122722 00:18:31.919 killing process with pid 122722 00:18:31.919 13:03:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:31.919 13:03:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:31.919 13:03:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122722' 00:18:31.919 13:03:50 -- common/autotest_common.sh@945 
-- # kill 122722 00:18:31.919 13:03:50 -- common/autotest_common.sh@950 -- # wait 122722 00:18:31.919 [2024-06-11 13:03:50.561155] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.919 [2024-06-11 13:03:50.561307] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.853 ************************************ 00:18:32.853 END TEST raid_state_function_test 00:18:32.853 ************************************ 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:32.853 00:18:32.853 real 0m13.629s 00:18:32.853 user 0m24.657s 00:18:32.853 sys 0m1.465s 00:18:32.853 13:03:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.853 13:03:51 -- common/autotest_common.sh@10 -- # set +x 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:18:32.853 13:03:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:32.853 13:03:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:32.853 13:03:51 -- common/autotest_common.sh@10 -- # set +x 00:18:32.853 ************************************ 00:18:32.853 START TEST raid_state_function_test_sb 00:18:32.853 ************************************ 00:18:32.853 13:03:51 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=123163 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123163' 00:18:32.853 Process raid pid: 123163 00:18:32.853 13:03:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123163 /var/tmp/spdk-raid.sock 00:18:32.853 13:03:51 -- common/autotest_common.sh@819 -- # '[' -z 123163 ']' 00:18:32.853 13:03:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:32.853 13:03:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:32.853 13:03:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:32.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:32.853 13:03:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:32.853 13:03:51 -- common/autotest_common.sh@10 -- # set +x 00:18:32.853 [2024-06-11 13:03:51.597614] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:32.853 [2024-06-11 13:03:51.598704] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.112 [2024-06-11 13:03:51.767653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.370 [2024-06-11 13:03:51.991899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.370 [2024-06-11 13:03:52.162708] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.935 13:03:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:33.936 13:03:52 -- common/autotest_common.sh@852 -- # return 0 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:33.936 [2024-06-11 13:03:52.760206] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:33.936 [2024-06-11 13:03:52.760405] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:33.936 [2024-06-11 13:03:52.760516] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.936 [2024-06-11 13:03:52.760573] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.936 [2024-06-11 13:03:52.760659] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:33.936 [2024-06-11 13:03:52.760728] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:33.936 [2024-06-11 13:03:52.760872] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:33.936 [2024-06-11 13:03:52.760926] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:33.936 
13:03:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.936 13:03:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.194 13:03:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.194 "name": "Existed_Raid", 00:18:34.194 "uuid": "8065a24a-9cc8-4c45-bbdb-5b554ecf9706", 00:18:34.194 "strip_size_kb": 64, 00:18:34.194 "state": "configuring", 00:18:34.194 "raid_level": "concat", 00:18:34.194 "superblock": true, 00:18:34.194 "num_base_bdevs": 4, 00:18:34.194 "num_base_bdevs_discovered": 0, 00:18:34.194 "num_base_bdevs_operational": 4, 00:18:34.194 "base_bdevs_list": [ 00:18:34.194 { 00:18:34.194 "name": "BaseBdev1", 00:18:34.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.194 "is_configured": false, 00:18:34.194 "data_offset": 0, 00:18:34.194 "data_size": 0 00:18:34.194 }, 00:18:34.194 { 00:18:34.194 "name": "BaseBdev2", 00:18:34.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.194 "is_configured": false, 00:18:34.195 "data_offset": 0, 00:18:34.195 "data_size": 0 00:18:34.195 }, 00:18:34.195 { 00:18:34.195 "name": "BaseBdev3", 00:18:34.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.195 "is_configured": false, 00:18:34.195 "data_offset": 0, 00:18:34.195 "data_size": 0 00:18:34.195 }, 00:18:34.195 { 00:18:34.195 "name": "BaseBdev4", 00:18:34.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.195 "is_configured": false, 00:18:34.195 "data_offset": 0, 00:18:34.195 "data_size": 0 00:18:34.195 } 00:18:34.195 ] 00:18:34.195 }' 00:18:34.195 13:03:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.195 13:03:52 -- common/autotest_common.sh@10 -- # set +x 00:18:35.129 13:03:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:35.130 [2024-06-11 13:03:53.828258] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:35.130 [2024-06-11 13:03:53.828406] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:35.130 13:03:53 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:35.388 [2024-06-11 13:03:54.076403] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:35.388 [2024-06-11 13:03:54.076592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:35.389 [2024-06-11 13:03:54.076707] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:35.389 [2024-06-11 13:03:54.076774] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:35.389 [2024-06-11 13:03:54.076859] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:35.389 [2024-06-11 13:03:54.076932] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:35.389 [2024-06-11 13:03:54.077023] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:35.389 [2024-06-11 13:03:54.077080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:35.389 13:03:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:35.647 [2024-06-11 13:03:54.354679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.647 BaseBdev1 00:18:35.647 13:03:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:35.647 13:03:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:35.647 13:03:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:35.647 13:03:54 -- common/autotest_common.sh@889 -- # local i 00:18:35.647 13:03:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:35.647 13:03:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:35.647 13:03:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:35.906 13:03:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:35.906 [ 00:18:35.906 { 00:18:35.906 "name": "BaseBdev1", 00:18:35.906 "aliases": [ 00:18:35.906 "b93b27fb-2986-406a-bdfb-ca94e24f6022" 00:18:35.906 ], 00:18:35.906 "product_name": "Malloc disk", 00:18:35.906 "block_size": 512, 00:18:35.906 "num_blocks": 65536, 00:18:35.906 "uuid": "b93b27fb-2986-406a-bdfb-ca94e24f6022", 00:18:35.906 "assigned_rate_limits": { 00:18:35.906 "rw_ios_per_sec": 0, 00:18:35.906 "rw_mbytes_per_sec": 0, 00:18:35.906 "r_mbytes_per_sec": 0, 00:18:35.906 "w_mbytes_per_sec": 0 00:18:35.906 }, 00:18:35.906 "claimed": true, 00:18:35.906 "claim_type": "exclusive_write", 00:18:35.906 "zoned": false, 00:18:35.906 "supported_io_types": { 00:18:35.906 "read": true, 00:18:35.906 "write": true, 00:18:35.906 "unmap": true, 00:18:35.906 "write_zeroes": true, 00:18:35.906 "flush": true, 00:18:35.906 "reset": true, 00:18:35.906 "compare": false, 00:18:35.906 "compare_and_write": false, 00:18:35.906 "abort": true, 00:18:35.906 "nvme_admin": false, 00:18:35.906 "nvme_io": false 00:18:35.906 }, 00:18:35.906 "memory_domains": [ 00:18:35.906 { 00:18:35.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.906 "dma_device_type": 2 00:18:35.906 } 00:18:35.906 ], 00:18:35.906 "driver_specific": {} 00:18:35.906 } 00:18:35.906 ] 00:18:36.165 13:03:54 -- common/autotest_common.sh@895 -- # return 0 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.165 13:03:54 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.165 "name": "Existed_Raid", 00:18:36.165 "uuid": "df83f950-d2c1-4b62-9695-90484c3a974d", 00:18:36.165 "strip_size_kb": 64, 00:18:36.165 "state": "configuring", 00:18:36.165 "raid_level": "concat", 00:18:36.165 "superblock": true, 00:18:36.165 "num_base_bdevs": 4, 00:18:36.165 "num_base_bdevs_discovered": 1, 00:18:36.165 "num_base_bdevs_operational": 4, 00:18:36.165 "base_bdevs_list": [ 00:18:36.165 { 00:18:36.165 "name": "BaseBdev1", 00:18:36.165 "uuid": "b93b27fb-2986-406a-bdfb-ca94e24f6022", 00:18:36.165 "is_configured": true, 00:18:36.165 "data_offset": 2048, 00:18:36.165 "data_size": 63488 00:18:36.165 }, 00:18:36.165 { 00:18:36.165 "name": "BaseBdev2", 00:18:36.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.165 "is_configured": false, 00:18:36.165 "data_offset": 0, 00:18:36.165 "data_size": 0 00:18:36.165 }, 00:18:36.165 { 00:18:36.165 "name": "BaseBdev3", 00:18:36.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.165 "is_configured": false, 00:18:36.165 "data_offset": 0, 00:18:36.165 "data_size": 0 00:18:36.165 }, 00:18:36.165 { 00:18:36.165 "name": "BaseBdev4", 00:18:36.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.165 "is_configured": false, 00:18:36.165 "data_offset": 0, 00:18:36.165 "data_size": 0 00:18:36.165 } 00:18:36.165 ] 00:18:36.165 }' 00:18:36.165 13:03:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.165 13:03:54 -- common/autotest_common.sh@10 -- # set +x 00:18:37.102 13:03:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:37.102 [2024-06-11 13:03:55.787035] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:37.102 [2024-06-11 13:03:55.787278] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:37.102 13:03:55 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:37.102 13:03:55 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:37.360 13:03:56 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:37.619 BaseBdev1 00:18:37.619 13:03:56 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:37.619 13:03:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:37.619 13:03:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:37.619 13:03:56 -- common/autotest_common.sh@889 -- # local i 00:18:37.619 13:03:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:37.619 13:03:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:37.619 13:03:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.878 13:03:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:38.136 [ 00:18:38.136 { 00:18:38.136 "name": "BaseBdev1", 00:18:38.136 "aliases": [ 00:18:38.136 
"905ebb5c-bbf5-4abd-9472-8d2cd0520e00" 00:18:38.136 ], 00:18:38.136 "product_name": "Malloc disk", 00:18:38.136 "block_size": 512, 00:18:38.136 "num_blocks": 65536, 00:18:38.137 "uuid": "905ebb5c-bbf5-4abd-9472-8d2cd0520e00", 00:18:38.137 "assigned_rate_limits": { 00:18:38.137 "rw_ios_per_sec": 0, 00:18:38.137 "rw_mbytes_per_sec": 0, 00:18:38.137 "r_mbytes_per_sec": 0, 00:18:38.137 "w_mbytes_per_sec": 0 00:18:38.137 }, 00:18:38.137 "claimed": false, 00:18:38.137 "zoned": false, 00:18:38.137 "supported_io_types": { 00:18:38.137 "read": true, 00:18:38.137 "write": true, 00:18:38.137 "unmap": true, 00:18:38.137 "write_zeroes": true, 00:18:38.137 "flush": true, 00:18:38.137 "reset": true, 00:18:38.137 "compare": false, 00:18:38.137 "compare_and_write": false, 00:18:38.137 "abort": true, 00:18:38.137 "nvme_admin": false, 00:18:38.137 "nvme_io": false 00:18:38.137 }, 00:18:38.137 "memory_domains": [ 00:18:38.137 { 00:18:38.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.137 "dma_device_type": 2 00:18:38.137 } 00:18:38.137 ], 00:18:38.137 "driver_specific": {} 00:18:38.137 } 00:18:38.137 ] 00:18:38.137 13:03:56 -- common/autotest_common.sh@895 -- # return 0 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:38.137 [2024-06-11 13:03:56.961897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:38.137 [2024-06-11 13:03:56.963844] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:38.137 [2024-06-11 13:03:56.964080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:38.137 [2024-06-11 13:03:56.964180] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:38.137 [2024-06-11 13:03:56.964237] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:38.137 [2024-06-11 13:03:56.964324] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:38.137 [2024-06-11 13:03:56.964466] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:38.137 13:03:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:38.396 13:03:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.396 13:03:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.396 13:03:57 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:18:38.396 "name": "Existed_Raid", 00:18:38.396 "uuid": "91bade5d-690d-49df-8d9a-bcccbccba3ce", 00:18:38.396 "strip_size_kb": 64, 00:18:38.396 "state": "configuring", 00:18:38.396 "raid_level": "concat", 00:18:38.396 "superblock": true, 00:18:38.396 "num_base_bdevs": 4, 00:18:38.396 "num_base_bdevs_discovered": 1, 00:18:38.396 "num_base_bdevs_operational": 4, 00:18:38.396 "base_bdevs_list": [ 00:18:38.396 { 00:18:38.396 "name": "BaseBdev1", 00:18:38.396 "uuid": "905ebb5c-bbf5-4abd-9472-8d2cd0520e00", 00:18:38.396 "is_configured": true, 00:18:38.396 "data_offset": 2048, 00:18:38.396 "data_size": 63488 00:18:38.396 }, 00:18:38.396 { 00:18:38.396 "name": "BaseBdev2", 00:18:38.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.396 "is_configured": false, 00:18:38.396 "data_offset": 0, 00:18:38.396 "data_size": 0 00:18:38.396 }, 00:18:38.396 { 00:18:38.396 "name": "BaseBdev3", 00:18:38.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.396 "is_configured": false, 00:18:38.396 "data_offset": 0, 00:18:38.396 "data_size": 0 00:18:38.396 }, 00:18:38.396 { 00:18:38.396 "name": "BaseBdev4", 00:18:38.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.396 "is_configured": false, 00:18:38.396 "data_offset": 0, 00:18:38.396 "data_size": 0 00:18:38.396 } 00:18:38.396 ] 00:18:38.396 }' 00:18:38.396 13:03:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:38.396 13:03:57 -- common/autotest_common.sh@10 -- # set +x 00:18:39.332 13:03:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:39.332 [2024-06-11 13:03:58.135820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.332 BaseBdev2 00:18:39.332 13:03:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:39.332 13:03:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:39.332 13:03:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:39.332 13:03:58 -- common/autotest_common.sh@889 -- # local i 00:18:39.332 13:03:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:39.332 13:03:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:39.332 13:03:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:39.591 13:03:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:39.849 [ 00:18:39.849 { 00:18:39.849 "name": "BaseBdev2", 00:18:39.849 "aliases": [ 00:18:39.849 "dc40a6ab-c25d-489b-8f43-4fdab6a7dc94" 00:18:39.849 ], 00:18:39.849 "product_name": "Malloc disk", 00:18:39.849 "block_size": 512, 00:18:39.849 "num_blocks": 65536, 00:18:39.849 "uuid": "dc40a6ab-c25d-489b-8f43-4fdab6a7dc94", 00:18:39.849 "assigned_rate_limits": { 00:18:39.849 "rw_ios_per_sec": 0, 00:18:39.849 "rw_mbytes_per_sec": 0, 00:18:39.849 "r_mbytes_per_sec": 0, 00:18:39.849 "w_mbytes_per_sec": 0 00:18:39.849 }, 00:18:39.849 "claimed": true, 00:18:39.849 "claim_type": "exclusive_write", 00:18:39.849 "zoned": false, 00:18:39.849 "supported_io_types": { 00:18:39.849 "read": true, 00:18:39.849 "write": true, 00:18:39.849 "unmap": true, 00:18:39.849 "write_zeroes": true, 00:18:39.849 "flush": true, 00:18:39.849 "reset": true, 00:18:39.849 "compare": false, 00:18:39.849 "compare_and_write": false, 00:18:39.849 "abort": true, 00:18:39.849 "nvme_admin": false, 00:18:39.850 
"nvme_io": false 00:18:39.850 }, 00:18:39.850 "memory_domains": [ 00:18:39.850 { 00:18:39.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.850 "dma_device_type": 2 00:18:39.850 } 00:18:39.850 ], 00:18:39.850 "driver_specific": {} 00:18:39.850 } 00:18:39.850 ] 00:18:39.850 13:03:58 -- common/autotest_common.sh@895 -- # return 0 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.850 13:03:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.108 13:03:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.108 "name": "Existed_Raid", 00:18:40.108 "uuid": "91bade5d-690d-49df-8d9a-bcccbccba3ce", 00:18:40.108 "strip_size_kb": 64, 00:18:40.108 "state": "configuring", 00:18:40.108 "raid_level": "concat", 00:18:40.108 "superblock": true, 00:18:40.108 "num_base_bdevs": 4, 00:18:40.108 "num_base_bdevs_discovered": 2, 00:18:40.108 "num_base_bdevs_operational": 4, 00:18:40.108 "base_bdevs_list": [ 00:18:40.108 { 00:18:40.108 "name": "BaseBdev1", 00:18:40.108 "uuid": "905ebb5c-bbf5-4abd-9472-8d2cd0520e00", 00:18:40.108 "is_configured": true, 00:18:40.108 "data_offset": 2048, 00:18:40.108 "data_size": 63488 00:18:40.108 }, 00:18:40.108 { 00:18:40.108 "name": "BaseBdev2", 00:18:40.108 "uuid": "dc40a6ab-c25d-489b-8f43-4fdab6a7dc94", 00:18:40.108 "is_configured": true, 00:18:40.108 "data_offset": 2048, 00:18:40.108 "data_size": 63488 00:18:40.108 }, 00:18:40.108 { 00:18:40.108 "name": "BaseBdev3", 00:18:40.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.109 "is_configured": false, 00:18:40.109 "data_offset": 0, 00:18:40.109 "data_size": 0 00:18:40.109 }, 00:18:40.109 { 00:18:40.109 "name": "BaseBdev4", 00:18:40.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.109 "is_configured": false, 00:18:40.109 "data_offset": 0, 00:18:40.109 "data_size": 0 00:18:40.109 } 00:18:40.109 ] 00:18:40.109 }' 00:18:40.109 13:03:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.109 13:03:58 -- common/autotest_common.sh@10 -- # set +x 00:18:40.676 13:03:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:40.935 [2024-06-11 13:03:59.620947] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:40.935 BaseBdev3 00:18:40.935 13:03:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:40.935 13:03:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:40.935 13:03:59 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:40.935 13:03:59 -- common/autotest_common.sh@889 -- # local i 00:18:40.935 13:03:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:40.935 13:03:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:40.935 13:03:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:41.193 13:03:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:41.451 [ 00:18:41.451 { 00:18:41.451 "name": "BaseBdev3", 00:18:41.451 "aliases": [ 00:18:41.451 "e1fe18d9-d8cc-4d39-af13-e0cdfc8215f3" 00:18:41.451 ], 00:18:41.451 "product_name": "Malloc disk", 00:18:41.451 "block_size": 512, 00:18:41.451 "num_blocks": 65536, 00:18:41.451 "uuid": "e1fe18d9-d8cc-4d39-af13-e0cdfc8215f3", 00:18:41.451 "assigned_rate_limits": { 00:18:41.451 "rw_ios_per_sec": 0, 00:18:41.451 "rw_mbytes_per_sec": 0, 00:18:41.451 "r_mbytes_per_sec": 0, 00:18:41.451 "w_mbytes_per_sec": 0 00:18:41.451 }, 00:18:41.451 "claimed": true, 00:18:41.451 "claim_type": "exclusive_write", 00:18:41.451 "zoned": false, 00:18:41.451 "supported_io_types": { 00:18:41.451 "read": true, 00:18:41.451 "write": true, 00:18:41.451 "unmap": true, 00:18:41.451 "write_zeroes": true, 00:18:41.451 "flush": true, 00:18:41.451 "reset": true, 00:18:41.451 "compare": false, 00:18:41.451 "compare_and_write": false, 00:18:41.451 "abort": true, 00:18:41.451 "nvme_admin": false, 00:18:41.451 "nvme_io": false 00:18:41.451 }, 00:18:41.451 "memory_domains": [ 00:18:41.451 { 00:18:41.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.451 "dma_device_type": 2 00:18:41.451 } 00:18:41.451 ], 00:18:41.451 "driver_specific": {} 00:18:41.451 } 00:18:41.451 ] 00:18:41.451 13:04:00 -- common/autotest_common.sh@895 -- # return 0 00:18:41.451 13:04:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:41.451 13:04:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:41.451 13:04:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:41.451 13:04:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:41.451 13:04:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:41.451 13:04:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:41.451 13:04:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:41.451 13:04:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:41.451 13:04:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.452 13:04:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.452 13:04:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.452 13:04:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.452 13:04:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.452 13:04:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.710 13:04:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.710 "name": "Existed_Raid", 00:18:41.710 "uuid": "91bade5d-690d-49df-8d9a-bcccbccba3ce", 00:18:41.710 "strip_size_kb": 64, 00:18:41.710 "state": "configuring", 00:18:41.710 "raid_level": "concat", 00:18:41.710 "superblock": true, 00:18:41.710 "num_base_bdevs": 4, 00:18:41.710 "num_base_bdevs_discovered": 3, 00:18:41.710 "num_base_bdevs_operational": 4, 
00:18:41.710 "base_bdevs_list": [ 00:18:41.710 { 00:18:41.710 "name": "BaseBdev1", 00:18:41.710 "uuid": "905ebb5c-bbf5-4abd-9472-8d2cd0520e00", 00:18:41.710 "is_configured": true, 00:18:41.710 "data_offset": 2048, 00:18:41.710 "data_size": 63488 00:18:41.710 }, 00:18:41.710 { 00:18:41.710 "name": "BaseBdev2", 00:18:41.710 "uuid": "dc40a6ab-c25d-489b-8f43-4fdab6a7dc94", 00:18:41.710 "is_configured": true, 00:18:41.710 "data_offset": 2048, 00:18:41.710 "data_size": 63488 00:18:41.710 }, 00:18:41.710 { 00:18:41.710 "name": "BaseBdev3", 00:18:41.710 "uuid": "e1fe18d9-d8cc-4d39-af13-e0cdfc8215f3", 00:18:41.710 "is_configured": true, 00:18:41.710 "data_offset": 2048, 00:18:41.710 "data_size": 63488 00:18:41.710 }, 00:18:41.710 { 00:18:41.710 "name": "BaseBdev4", 00:18:41.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.710 "is_configured": false, 00:18:41.710 "data_offset": 0, 00:18:41.710 "data_size": 0 00:18:41.710 } 00:18:41.710 ] 00:18:41.710 }' 00:18:41.710 13:04:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.710 13:04:00 -- common/autotest_common.sh@10 -- # set +x 00:18:42.278 13:04:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:42.537 [2024-06-11 13:04:01.250911] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:42.537 [2024-06-11 13:04:01.251408] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:42.537 [2024-06-11 13:04:01.251531] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:42.537 BaseBdev4 00:18:42.537 [2024-06-11 13:04:01.251699] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:42.537 [2024-06-11 13:04:01.252043] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:42.537 [2024-06-11 13:04:01.252192] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:42.537 [2024-06-11 13:04:01.252433] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.537 13:04:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:42.537 13:04:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:42.537 13:04:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:42.537 13:04:01 -- common/autotest_common.sh@889 -- # local i 00:18:42.537 13:04:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:42.537 13:04:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:42.537 13:04:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.796 13:04:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:43.053 [ 00:18:43.053 { 00:18:43.053 "name": "BaseBdev4", 00:18:43.053 "aliases": [ 00:18:43.053 "a10bef46-0150-42bc-89be-cdcc2405117a" 00:18:43.053 ], 00:18:43.053 "product_name": "Malloc disk", 00:18:43.053 "block_size": 512, 00:18:43.053 "num_blocks": 65536, 00:18:43.053 "uuid": "a10bef46-0150-42bc-89be-cdcc2405117a", 00:18:43.053 "assigned_rate_limits": { 00:18:43.053 "rw_ios_per_sec": 0, 00:18:43.053 "rw_mbytes_per_sec": 0, 00:18:43.053 "r_mbytes_per_sec": 0, 00:18:43.053 "w_mbytes_per_sec": 0 00:18:43.053 }, 00:18:43.053 "claimed": true, 00:18:43.053 "claim_type": 
"exclusive_write", 00:18:43.053 "zoned": false, 00:18:43.053 "supported_io_types": { 00:18:43.053 "read": true, 00:18:43.053 "write": true, 00:18:43.054 "unmap": true, 00:18:43.054 "write_zeroes": true, 00:18:43.054 "flush": true, 00:18:43.054 "reset": true, 00:18:43.054 "compare": false, 00:18:43.054 "compare_and_write": false, 00:18:43.054 "abort": true, 00:18:43.054 "nvme_admin": false, 00:18:43.054 "nvme_io": false 00:18:43.054 }, 00:18:43.054 "memory_domains": [ 00:18:43.054 { 00:18:43.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.054 "dma_device_type": 2 00:18:43.054 } 00:18:43.054 ], 00:18:43.054 "driver_specific": {} 00:18:43.054 } 00:18:43.054 ] 00:18:43.054 13:04:01 -- common/autotest_common.sh@895 -- # return 0 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.054 13:04:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.312 13:04:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.312 "name": "Existed_Raid", 00:18:43.312 "uuid": "91bade5d-690d-49df-8d9a-bcccbccba3ce", 00:18:43.312 "strip_size_kb": 64, 00:18:43.312 "state": "online", 00:18:43.312 "raid_level": "concat", 00:18:43.312 "superblock": true, 00:18:43.312 "num_base_bdevs": 4, 00:18:43.312 "num_base_bdevs_discovered": 4, 00:18:43.312 "num_base_bdevs_operational": 4, 00:18:43.312 "base_bdevs_list": [ 00:18:43.312 { 00:18:43.312 "name": "BaseBdev1", 00:18:43.312 "uuid": "905ebb5c-bbf5-4abd-9472-8d2cd0520e00", 00:18:43.312 "is_configured": true, 00:18:43.312 "data_offset": 2048, 00:18:43.312 "data_size": 63488 00:18:43.312 }, 00:18:43.312 { 00:18:43.312 "name": "BaseBdev2", 00:18:43.312 "uuid": "dc40a6ab-c25d-489b-8f43-4fdab6a7dc94", 00:18:43.312 "is_configured": true, 00:18:43.312 "data_offset": 2048, 00:18:43.312 "data_size": 63488 00:18:43.312 }, 00:18:43.312 { 00:18:43.312 "name": "BaseBdev3", 00:18:43.312 "uuid": "e1fe18d9-d8cc-4d39-af13-e0cdfc8215f3", 00:18:43.312 "is_configured": true, 00:18:43.312 "data_offset": 2048, 00:18:43.312 "data_size": 63488 00:18:43.312 }, 00:18:43.312 { 00:18:43.312 "name": "BaseBdev4", 00:18:43.312 "uuid": "a10bef46-0150-42bc-89be-cdcc2405117a", 00:18:43.312 "is_configured": true, 00:18:43.312 "data_offset": 2048, 00:18:43.312 "data_size": 63488 00:18:43.312 } 00:18:43.312 ] 00:18:43.312 }' 00:18:43.312 13:04:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.312 13:04:01 -- common/autotest_common.sh@10 -- # set +x 00:18:43.878 13:04:02 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:44.137 [2024-06-11 13:04:02.753937] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:44.137 [2024-06-11 13:04:02.754135] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.137 [2024-06-11 13:04:02.754306] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.137 13:04:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.395 13:04:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:44.395 "name": "Existed_Raid", 00:18:44.395 "uuid": "91bade5d-690d-49df-8d9a-bcccbccba3ce", 00:18:44.395 "strip_size_kb": 64, 00:18:44.395 "state": "offline", 00:18:44.395 "raid_level": "concat", 00:18:44.395 "superblock": true, 00:18:44.395 "num_base_bdevs": 4, 00:18:44.395 "num_base_bdevs_discovered": 3, 00:18:44.395 "num_base_bdevs_operational": 3, 00:18:44.395 "base_bdevs_list": [ 00:18:44.395 { 00:18:44.395 "name": null, 00:18:44.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.395 "is_configured": false, 00:18:44.395 "data_offset": 2048, 00:18:44.395 "data_size": 63488 00:18:44.395 }, 00:18:44.395 { 00:18:44.395 "name": "BaseBdev2", 00:18:44.395 "uuid": "dc40a6ab-c25d-489b-8f43-4fdab6a7dc94", 00:18:44.395 "is_configured": true, 00:18:44.395 "data_offset": 2048, 00:18:44.395 "data_size": 63488 00:18:44.395 }, 00:18:44.395 { 00:18:44.395 "name": "BaseBdev3", 00:18:44.395 "uuid": "e1fe18d9-d8cc-4d39-af13-e0cdfc8215f3", 00:18:44.395 "is_configured": true, 00:18:44.395 "data_offset": 2048, 00:18:44.395 "data_size": 63488 00:18:44.395 }, 00:18:44.395 { 00:18:44.395 "name": "BaseBdev4", 00:18:44.395 "uuid": "a10bef46-0150-42bc-89be-cdcc2405117a", 00:18:44.395 "is_configured": true, 00:18:44.395 "data_offset": 2048, 00:18:44.395 "data_size": 63488 00:18:44.395 } 00:18:44.395 ] 00:18:44.395 }' 00:18:44.395 13:04:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:44.395 13:04:03 -- common/autotest_common.sh@10 -- # set +x 00:18:44.963 13:04:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:44.963 13:04:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:44.963 13:04:03 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.963 13:04:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:45.222 13:04:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:45.222 13:04:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:45.222 13:04:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:45.481 [2024-06-11 13:04:04.277465] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:45.739 13:04:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:45.739 13:04:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:45.739 13:04:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.739 13:04:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:45.739 13:04:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:45.739 13:04:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:45.739 13:04:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:45.998 [2024-06-11 13:04:04.779765] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:46.256 13:04:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:46.256 13:04:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:46.256 13:04:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.256 13:04:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:46.256 13:04:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:46.256 13:04:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:46.256 13:04:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:46.514 [2024-06-11 13:04:05.278547] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:46.515 [2024-06-11 13:04:05.278782] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:46.774 13:04:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:46.774 13:04:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:46.774 13:04:05 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.774 13:04:05 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:46.774 13:04:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:46.774 13:04:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:46.774 13:04:05 -- bdev/bdev_raid.sh@287 -- # killprocess 123163 00:18:46.774 13:04:05 -- common/autotest_common.sh@926 -- # '[' -z 123163 ']' 00:18:46.774 13:04:05 -- common/autotest_common.sh@930 -- # kill -0 123163 00:18:46.774 13:04:05 -- common/autotest_common.sh@931 -- # uname 00:18:46.774 13:04:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:46.774 13:04:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123163 00:18:46.774 killing process with pid 123163 00:18:46.774 13:04:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:46.774 13:04:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:46.774 13:04:05 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 123163' 00:18:46.774 13:04:05 -- common/autotest_common.sh@945 -- # kill 123163 00:18:46.774 13:04:05 -- common/autotest_common.sh@950 -- # wait 123163 00:18:46.774 [2024-06-11 13:04:05.578873] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.774 [2024-06-11 13:04:05.579017] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:47.710 ************************************ 00:18:47.710 END TEST raid_state_function_test_sb 00:18:47.710 ************************************ 00:18:47.710 13:04:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:47.710 00:18:47.710 real 0m14.984s 00:18:47.710 user 0m26.911s 00:18:47.710 sys 0m1.768s 00:18:47.710 13:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.710 13:04:06 -- common/autotest_common.sh@10 -- # set +x 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:47.970 13:04:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:47.970 13:04:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:47.970 13:04:06 -- common/autotest_common.sh@10 -- # set +x 00:18:47.970 ************************************ 00:18:47.970 START TEST raid_superblock_test 00:18:47.970 ************************************ 00:18:47.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:47.970 13:04:06 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@357 -- # raid_pid=123658 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123658 /var/tmp/spdk-raid.sock 00:18:47.970 13:04:06 -- common/autotest_common.sh@819 -- # '[' -z 123658 ']' 00:18:47.970 13:04:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:47.970 13:04:06 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:47.970 13:04:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:47.970 13:04:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
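For reference, the pattern traced above — launch a bare bdev_svc application as the RPC target for the raid tests, then block until its UNIX-domain socket answers before issuing any bdev_raid RPCs — can be reproduced by hand. This is a minimal sketch using the paths and socket name from this run; the polling loop with rpc_get_methods is a simplified stand-in for the waitforlisten helper in autotest_common.sh, not its actual implementation:

  # Host the raid bdevs under test in a bare SPDK app, with its JSON-RPC
  # server on a private socket and bdev_raid debug traces enabled.
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!

  # Simplified wait: poll with a cheap RPC until the target starts answering.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until $rpc -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done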
00:18:47.970 13:04:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:47.970 13:04:06 -- common/autotest_common.sh@10 -- # set +x 00:18:47.970 [2024-06-11 13:04:06.616807] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:47.970 [2024-06-11 13:04:06.617189] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123658 ] 00:18:47.970 [2024-06-11 13:04:06.769044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.229 [2024-06-11 13:04:06.964358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.488 [2024-06-11 13:04:07.130548] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.746 13:04:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:48.746 13:04:07 -- common/autotest_common.sh@852 -- # return 0 00:18:48.746 13:04:07 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:48.746 13:04:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:48.746 13:04:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:48.746 13:04:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:48.746 13:04:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:48.746 13:04:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:48.746 13:04:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:48.746 13:04:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:48.746 13:04:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:49.005 malloc1 00:18:49.005 13:04:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:49.264 [2024-06-11 13:04:07.923552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:49.264 [2024-06-11 13:04:07.923842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.264 [2024-06-11 13:04:07.924027] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:49.264 [2024-06-11 13:04:07.924203] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.264 [2024-06-11 13:04:07.926807] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.264 [2024-06-11 13:04:07.927000] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:49.264 pt1 00:18:49.264 13:04:07 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:49.264 13:04:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:49.264 13:04:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:49.264 13:04:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:49.264 13:04:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:49.264 13:04:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:49.264 13:04:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:49.264 13:04:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:49.264 13:04:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:49.522 malloc2 00:18:49.522 13:04:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:49.781 [2024-06-11 13:04:08.377953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:49.781 [2024-06-11 13:04:08.378158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.781 [2024-06-11 13:04:08.378231] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:49.781 [2024-06-11 13:04:08.378466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.781 [2024-06-11 13:04:08.380384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.781 [2024-06-11 13:04:08.380550] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:49.781 pt2 00:18:49.781 13:04:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:49.781 13:04:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:49.781 13:04:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:49.781 13:04:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:49.781 13:04:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:49.781 13:04:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:49.781 13:04:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:49.781 13:04:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:49.781 13:04:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:49.781 malloc3 00:18:49.781 13:04:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:50.040 [2024-06-11 13:04:08.779525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:50.040 [2024-06-11 13:04:08.779723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.040 [2024-06-11 13:04:08.779791] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:50.040 [2024-06-11 13:04:08.779962] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.040 [2024-06-11 13:04:08.782027] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.040 [2024-06-11 13:04:08.782197] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:50.040 pt3 00:18:50.040 13:04:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:50.040 13:04:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:50.040 13:04:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:50.040 13:04:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:50.040 13:04:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:50.040 13:04:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:50.040 13:04:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:50.040 13:04:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:50.040 13:04:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:50.299 malloc4 00:18:50.299 13:04:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:50.557 [2024-06-11 13:04:09.188016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:50.557 [2024-06-11 13:04:09.188231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.557 [2024-06-11 13:04:09.188305] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:50.557 [2024-06-11 13:04:09.188541] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.557 [2024-06-11 13:04:09.190552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.557 [2024-06-11 13:04:09.190727] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:50.557 pt4 00:18:50.557 13:04:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:50.557 13:04:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:50.557 13:04:09 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:50.557 [2024-06-11 13:04:09.384167] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:50.557 [2024-06-11 13:04:09.386390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:50.557 [2024-06-11 13:04:09.386597] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:50.557 [2024-06-11 13:04:09.386728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:50.557 [2024-06-11 13:04:09.387019] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:50.557 [2024-06-11 13:04:09.387112] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:50.557 [2024-06-11 13:04:09.387290] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:50.557 [2024-06-11 13:04:09.387755] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:50.557 [2024-06-11 13:04:09.387873] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:50.557 [2024-06-11 13:04:09.388165] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
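The build-up in this part of the trace follows one pattern per leg: a malloc bdev, a passthru bdev layered on it with a fixed UUID, and finally a concat raid bdev with an on-disk superblock assembled from the four passthru legs and checked through bdev_raid_get_bdevs. A condensed sketch of those RPCs, using only the names, sizes and flags visible in the log above (the loop is an editorial compression of the four identical call sites):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  for i in 1 2 3 4; do
      # 32 MB malloc bdev with 512-byte blocks, wrapped by a passthru bdev
      $rpc bdev_malloc_create 32 512 -b malloc$i
      $rpc bdev_passthru_create -b malloc$i -p pt$i \
          -u 00000000-0000-0000-0000-00000000000$i
  done

  # concat array, 64 KiB strip size, superblock enabled (-s)
  $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

  # The array should come online with all four base bdevs configured
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'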
00:18:50.816 13:04:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.075 13:04:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.075 "name": "raid_bdev1", 00:18:51.075 "uuid": "38766aaa-4c40-4d09-8b6c-2f58d314236c", 00:18:51.075 "strip_size_kb": 64, 00:18:51.075 "state": "online", 00:18:51.075 "raid_level": "concat", 00:18:51.075 "superblock": true, 00:18:51.075 "num_base_bdevs": 4, 00:18:51.075 "num_base_bdevs_discovered": 4, 00:18:51.075 "num_base_bdevs_operational": 4, 00:18:51.075 "base_bdevs_list": [ 00:18:51.075 { 00:18:51.075 "name": "pt1", 00:18:51.075 "uuid": "807a1ec2-9aa2-5b05-a96a-fe33142f6170", 00:18:51.075 "is_configured": true, 00:18:51.075 "data_offset": 2048, 00:18:51.075 "data_size": 63488 00:18:51.075 }, 00:18:51.075 { 00:18:51.075 "name": "pt2", 00:18:51.075 "uuid": "af385ac8-e746-5ef5-99e5-762757ef7e7b", 00:18:51.075 "is_configured": true, 00:18:51.075 "data_offset": 2048, 00:18:51.075 "data_size": 63488 00:18:51.075 }, 00:18:51.075 { 00:18:51.075 "name": "pt3", 00:18:51.075 "uuid": "3efbd089-9e7f-5410-b1fa-749aa3ac1f0b", 00:18:51.075 "is_configured": true, 00:18:51.075 "data_offset": 2048, 00:18:51.075 "data_size": 63488 00:18:51.075 }, 00:18:51.075 { 00:18:51.075 "name": "pt4", 00:18:51.075 "uuid": "1b231d02-dea3-51dc-8f8b-13bae10f8972", 00:18:51.075 "is_configured": true, 00:18:51.075 "data_offset": 2048, 00:18:51.075 "data_size": 63488 00:18:51.075 } 00:18:51.075 ] 00:18:51.075 }' 00:18:51.075 13:04:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.075 13:04:09 -- common/autotest_common.sh@10 -- # set +x 00:18:51.641 13:04:10 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:51.641 13:04:10 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:51.900 [2024-06-11 13:04:10.520544] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.900 13:04:10 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=38766aaa-4c40-4d09-8b6c-2f58d314236c 00:18:51.900 13:04:10 -- bdev/bdev_raid.sh@380 -- # '[' -z 38766aaa-4c40-4d09-8b6c-2f58d314236c ']' 00:18:51.900 13:04:10 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:52.159 [2024-06-11 13:04:10.760338] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.159 [2024-06-11 13:04:10.760517] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.159 [2024-06-11 13:04:10.760678] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.159 [2024-06-11 13:04:10.760869] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.159 [2024-06-11 13:04:10.760973] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:52.159 13:04:10 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.159 13:04:10 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:52.418 13:04:11 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:52.418 13:04:11 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:52.418 13:04:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:52.418 13:04:11 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
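The teardown traced above mirrors the setup: read the array's UUID back to confirm it exists, delete the raid bdev, check that bdev_raid_get_bdevs no longer reports anything, then drop each passthru leg. Roughly, against the same socket and with the same names as this run (the loop again condenses four identical deletes):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # The array must still expose a UUID before it is torn down
  uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  [ -n "$uuid" ]

  # Delete the raid bdev itself, then verify nothing is left behind
  $rpc bdev_raid_delete raid_bdev1
  [ -z "$($rpc bdev_raid_get_bdevs all | jq -r '.[]')" ]

  # Finally remove the passthru legs that backed it
  for i in 1 2 3 4; do
      $rpc bdev_passthru_delete pt$i
  done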
00:18:52.418 13:04:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:52.418 13:04:11 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:52.686 13:04:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:52.686 13:04:11 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:52.960 13:04:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:52.960 13:04:11 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:53.219 13:04:11 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:53.219 13:04:11 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:53.477 13:04:12 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:53.477 13:04:12 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:53.477 13:04:12 -- common/autotest_common.sh@640 -- # local es=0 00:18:53.477 13:04:12 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:53.477 13:04:12 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.477 13:04:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:53.477 13:04:12 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.477 13:04:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:53.477 13:04:12 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.477 13:04:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:53.477 13:04:12 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.477 13:04:12 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:53.477 13:04:12 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:53.477 [2024-06-11 13:04:12.268566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:53.477 [2024-06-11 13:04:12.270822] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:53.477 [2024-06-11 13:04:12.271063] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:53.477 [2024-06-11 13:04:12.271248] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:53.477 [2024-06-11 13:04:12.271420] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:53.477 [2024-06-11 13:04:12.271612] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:53.477 [2024-06-11 13:04:12.271753] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:53.477 
[2024-06-11 13:04:12.271905] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:53.477 [2024-06-11 13:04:12.272019] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:53.477 [2024-06-11 13:04:12.272107] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:18:53.477 request: 00:18:53.477 { 00:18:53.477 "name": "raid_bdev1", 00:18:53.477 "raid_level": "concat", 00:18:53.477 "base_bdevs": [ 00:18:53.477 "malloc1", 00:18:53.477 "malloc2", 00:18:53.477 "malloc3", 00:18:53.477 "malloc4" 00:18:53.477 ], 00:18:53.477 "superblock": false, 00:18:53.477 "strip_size_kb": 64, 00:18:53.477 "method": "bdev_raid_create", 00:18:53.477 "req_id": 1 00:18:53.477 } 00:18:53.477 Got JSON-RPC error response 00:18:53.477 response: 00:18:53.477 { 00:18:53.477 "code": -17, 00:18:53.477 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:53.477 } 00:18:53.477 13:04:12 -- common/autotest_common.sh@643 -- # es=1 00:18:53.477 13:04:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:53.477 13:04:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:53.477 13:04:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:53.477 13:04:12 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.477 13:04:12 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:53.735 13:04:12 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:53.735 13:04:12 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:53.735 13:04:12 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:53.994 [2024-06-11 13:04:12.672698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:53.994 [2024-06-11 13:04:12.672908] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.994 [2024-06-11 13:04:12.672972] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:53.994 [2024-06-11 13:04:12.673128] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.994 [2024-06-11 13:04:12.675177] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.994 [2024-06-11 13:04:12.675357] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:53.994 [2024-06-11 13:04:12.675576] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:53.994 [2024-06-11 13:04:12.675714] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:53.994 pt1 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.994 13:04:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.252 13:04:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.252 "name": "raid_bdev1", 00:18:54.252 "uuid": "38766aaa-4c40-4d09-8b6c-2f58d314236c", 00:18:54.252 "strip_size_kb": 64, 00:18:54.252 "state": "configuring", 00:18:54.252 "raid_level": "concat", 00:18:54.252 "superblock": true, 00:18:54.252 "num_base_bdevs": 4, 00:18:54.252 "num_base_bdevs_discovered": 1, 00:18:54.252 "num_base_bdevs_operational": 4, 00:18:54.252 "base_bdevs_list": [ 00:18:54.252 { 00:18:54.252 "name": "pt1", 00:18:54.252 "uuid": "807a1ec2-9aa2-5b05-a96a-fe33142f6170", 00:18:54.252 "is_configured": true, 00:18:54.252 "data_offset": 2048, 00:18:54.252 "data_size": 63488 00:18:54.252 }, 00:18:54.252 { 00:18:54.252 "name": null, 00:18:54.252 "uuid": "af385ac8-e746-5ef5-99e5-762757ef7e7b", 00:18:54.252 "is_configured": false, 00:18:54.252 "data_offset": 2048, 00:18:54.252 "data_size": 63488 00:18:54.252 }, 00:18:54.252 { 00:18:54.252 "name": null, 00:18:54.252 "uuid": "3efbd089-9e7f-5410-b1fa-749aa3ac1f0b", 00:18:54.252 "is_configured": false, 00:18:54.252 "data_offset": 2048, 00:18:54.252 "data_size": 63488 00:18:54.252 }, 00:18:54.252 { 00:18:54.252 "name": null, 00:18:54.252 "uuid": "1b231d02-dea3-51dc-8f8b-13bae10f8972", 00:18:54.252 "is_configured": false, 00:18:54.252 "data_offset": 2048, 00:18:54.252 "data_size": 63488 00:18:54.252 } 00:18:54.252 ] 00:18:54.252 }' 00:18:54.252 13:04:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.252 13:04:12 -- common/autotest_common.sh@10 -- # set +x 00:18:54.819 13:04:13 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:54.819 13:04:13 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:55.077 [2024-06-11 13:04:13.788944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:55.077 [2024-06-11 13:04:13.789251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.077 [2024-06-11 13:04:13.789329] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:55.077 [2024-06-11 13:04:13.789589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.077 [2024-06-11 13:04:13.790142] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.077 [2024-06-11 13:04:13.790331] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:55.077 [2024-06-11 13:04:13.790522] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:55.077 [2024-06-11 13:04:13.790676] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.077 pt2 00:18:55.077 13:04:13 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:55.335 [2024-06-11 13:04:14.045048] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
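verify_raid_bdev_state, whose locals are being set up at this point in the trace, amounts to one bdev_raid_get_bdevs call plus jq comparisons against the expected fields. A hedged, simplified reconstruction of that helper — the field names match the JSON dumps shown above, but the real function in bdev_raid.sh also counts discovered base bdevs and manages xtrace, which is omitted here:

  verify_raid_bdev_state() {
      # name, expected state, raid level, strip size (KiB), operational base bdevs
      local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
      local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
      local info

      info=$($rpc bdev_raid_get_bdevs all |
             jq -r ".[] | select(.name == \"$name\")")
      [ -n "$info" ] || return 1

      [ "$(jq -r '.state' <<< "$info")" = "$expected_state" ] &&
      [ "$(jq -r '.raid_level' <<< "$info")" = "$raid_level" ] &&
      [ "$(jq -r '.strip_size_kb' <<< "$info")" -eq "$strip_size" ] &&
      [ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" -eq "$operational" ]
  }

  # e.g. after pt2 is dropped and re-added the array is expected to be assembling:
  # verify_raid_bdev_state raid_bdev1 configuring concat 64 4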
00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.335 13:04:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.593 13:04:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.593 "name": "raid_bdev1", 00:18:55.593 "uuid": "38766aaa-4c40-4d09-8b6c-2f58d314236c", 00:18:55.593 "strip_size_kb": 64, 00:18:55.593 "state": "configuring", 00:18:55.593 "raid_level": "concat", 00:18:55.593 "superblock": true, 00:18:55.593 "num_base_bdevs": 4, 00:18:55.593 "num_base_bdevs_discovered": 1, 00:18:55.593 "num_base_bdevs_operational": 4, 00:18:55.593 "base_bdevs_list": [ 00:18:55.593 { 00:18:55.593 "name": "pt1", 00:18:55.593 "uuid": "807a1ec2-9aa2-5b05-a96a-fe33142f6170", 00:18:55.593 "is_configured": true, 00:18:55.593 "data_offset": 2048, 00:18:55.593 "data_size": 63488 00:18:55.593 }, 00:18:55.593 { 00:18:55.593 "name": null, 00:18:55.593 "uuid": "af385ac8-e746-5ef5-99e5-762757ef7e7b", 00:18:55.593 "is_configured": false, 00:18:55.593 "data_offset": 2048, 00:18:55.593 "data_size": 63488 00:18:55.593 }, 00:18:55.593 { 00:18:55.593 "name": null, 00:18:55.593 "uuid": "3efbd089-9e7f-5410-b1fa-749aa3ac1f0b", 00:18:55.593 "is_configured": false, 00:18:55.593 "data_offset": 2048, 00:18:55.593 "data_size": 63488 00:18:55.593 }, 00:18:55.593 { 00:18:55.593 "name": null, 00:18:55.593 "uuid": "1b231d02-dea3-51dc-8f8b-13bae10f8972", 00:18:55.593 "is_configured": false, 00:18:55.593 "data_offset": 2048, 00:18:55.593 "data_size": 63488 00:18:55.593 } 00:18:55.593 ] 00:18:55.593 }' 00:18:55.593 13:04:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.593 13:04:14 -- common/autotest_common.sh@10 -- # set +x 00:18:56.160 13:04:14 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:56.160 13:04:14 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:56.160 13:04:14 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:56.418 [2024-06-11 13:04:15.145229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:56.418 [2024-06-11 13:04:15.145492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.418 [2024-06-11 13:04:15.145675] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:56.418 [2024-06-11 13:04:15.145784] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.418 [2024-06-11 13:04:15.146291] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.418 [2024-06-11 13:04:15.146456] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:56.418 [2024-06-11 13:04:15.146639] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:18:56.418 [2024-06-11 13:04:15.146763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:56.418 pt2 00:18:56.418 13:04:15 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:56.418 13:04:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:56.418 13:04:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:56.676 [2024-06-11 13:04:15.377254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:56.676 [2024-06-11 13:04:15.377468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.676 [2024-06-11 13:04:15.377531] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:56.676 [2024-06-11 13:04:15.377765] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.676 [2024-06-11 13:04:15.378225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.676 [2024-06-11 13:04:15.378408] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:56.676 [2024-06-11 13:04:15.378634] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:56.676 [2024-06-11 13:04:15.378757] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:56.676 pt3 00:18:56.676 13:04:15 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:56.676 13:04:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:56.676 13:04:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:56.934 [2024-06-11 13:04:15.573280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:56.934 [2024-06-11 13:04:15.573505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.934 [2024-06-11 13:04:15.573572] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:56.934 [2024-06-11 13:04:15.573816] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.934 [2024-06-11 13:04:15.574230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.934 [2024-06-11 13:04:15.574423] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:56.934 [2024-06-11 13:04:15.574625] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:56.934 [2024-06-11 13:04:15.574737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:56.934 [2024-06-11 13:04:15.574900] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:18:56.934 [2024-06-11 13:04:15.574985] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:56.934 [2024-06-11 13:04:15.575201] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:56.934 [2024-06-11 13:04:15.575578] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:18:56.934 [2024-06-11 13:04:15.575672] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:18:56.934 [2024-06-11 13:04:15.575869] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:56.934 pt4 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.934 13:04:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.934 "name": "raid_bdev1", 00:18:56.934 "uuid": "38766aaa-4c40-4d09-8b6c-2f58d314236c", 00:18:56.934 "strip_size_kb": 64, 00:18:56.934 "state": "online", 00:18:56.934 "raid_level": "concat", 00:18:56.934 "superblock": true, 00:18:56.934 "num_base_bdevs": 4, 00:18:56.935 "num_base_bdevs_discovered": 4, 00:18:56.935 "num_base_bdevs_operational": 4, 00:18:56.935 "base_bdevs_list": [ 00:18:56.935 { 00:18:56.935 "name": "pt1", 00:18:56.935 "uuid": "807a1ec2-9aa2-5b05-a96a-fe33142f6170", 00:18:56.935 "is_configured": true, 00:18:56.935 "data_offset": 2048, 00:18:56.935 "data_size": 63488 00:18:56.935 }, 00:18:56.935 { 00:18:56.935 "name": "pt2", 00:18:56.935 "uuid": "af385ac8-e746-5ef5-99e5-762757ef7e7b", 00:18:56.935 "is_configured": true, 00:18:56.935 "data_offset": 2048, 00:18:56.935 "data_size": 63488 00:18:56.935 }, 00:18:56.935 { 00:18:56.935 "name": "pt3", 00:18:56.935 "uuid": "3efbd089-9e7f-5410-b1fa-749aa3ac1f0b", 00:18:56.935 "is_configured": true, 00:18:56.935 "data_offset": 2048, 00:18:56.935 "data_size": 63488 00:18:56.935 }, 00:18:56.935 { 00:18:56.935 "name": "pt4", 00:18:56.935 "uuid": "1b231d02-dea3-51dc-8f8b-13bae10f8972", 00:18:56.935 "is_configured": true, 00:18:56.935 "data_offset": 2048, 00:18:56.935 "data_size": 63488 00:18:56.935 } 00:18:56.935 ] 00:18:56.935 }' 00:18:56.935 13:04:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.935 13:04:15 -- common/autotest_common.sh@10 -- # set +x 00:18:57.868 13:04:16 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:57.868 13:04:16 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:57.868 [2024-06-11 13:04:16.617876] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.868 13:04:16 -- bdev/bdev_raid.sh@430 -- # '[' 38766aaa-4c40-4d09-8b6c-2f58d314236c '!=' 38766aaa-4c40-4d09-8b6c-2f58d314236c ']' 00:18:57.868 13:04:16 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:18:57.868 13:04:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:57.868 13:04:16 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:57.868 13:04:16 -- bdev/bdev_raid.sh@511 -- # killprocess 123658 00:18:57.868 13:04:16 -- common/autotest_common.sh@926 -- # '[' 
-z 123658 ']' 00:18:57.868 13:04:16 -- common/autotest_common.sh@930 -- # kill -0 123658 00:18:57.868 13:04:16 -- common/autotest_common.sh@931 -- # uname 00:18:57.868 13:04:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:57.868 13:04:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123658 00:18:57.868 killing process with pid 123658 00:18:57.868 13:04:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:57.868 13:04:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:57.868 13:04:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123658' 00:18:57.868 13:04:16 -- common/autotest_common.sh@945 -- # kill 123658 00:18:57.868 13:04:16 -- common/autotest_common.sh@950 -- # wait 123658 00:18:57.868 [2024-06-11 13:04:16.649668] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.868 [2024-06-11 13:04:16.649747] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.868 [2024-06-11 13:04:16.649883] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.868 [2024-06-11 13:04:16.650006] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:18:58.127 [2024-06-11 13:04:16.911314] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:59.062 ************************************ 00:18:59.062 END TEST raid_superblock_test 00:18:59.062 ************************************ 00:18:59.062 13:04:17 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:59.062 00:18:59.062 real 0m11.280s 00:18:59.062 user 0m19.933s 00:18:59.063 sys 0m1.200s 00:18:59.063 13:04:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.063 13:04:17 -- common/autotest_common.sh@10 -- # set +x 00:18:59.063 13:04:17 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:59.063 13:04:17 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:59.063 13:04:17 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:59.063 13:04:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:59.063 13:04:17 -- common/autotest_common.sh@10 -- # set +x 00:18:59.063 ************************************ 00:18:59.063 START TEST raid_state_function_test 00:18:59.063 ************************************ 00:18:59.063 13:04:17 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:18:59.063 13:04:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:59.063 13:04:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:59.063 13:04:17 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:59.063 13:04:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:59.063 13:04:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.321 13:04:17 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:59.321 13:04:17 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:59.322 13:04:17 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:59.322 13:04:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=123995 00:18:59.322 13:04:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123995' 00:18:59.322 13:04:17 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:59.322 Process raid pid: 123995 00:18:59.322 13:04:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123995 /var/tmp/spdk-raid.sock 00:18:59.322 13:04:17 -- common/autotest_common.sh@819 -- # '[' -z 123995 ']' 00:18:59.322 13:04:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:59.322 13:04:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:59.322 13:04:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:59.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:59.322 13:04:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:59.322 13:04:17 -- common/autotest_common.sh@10 -- # set +x 00:18:59.322 [2024-06-11 13:04:17.958170] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:18:59.322 [2024-06-11 13:04:17.958517] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.322 [2024-06-11 13:04:18.102504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.580 [2024-06-11 13:04:18.264717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.839 [2024-06-11 13:04:18.434612] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.099 13:04:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:00.099 13:04:18 -- common/autotest_common.sh@852 -- # return 0 00:19:00.099 13:04:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:00.357 [2024-06-11 13:04:19.073258] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.358 [2024-06-11 13:04:19.073566] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.358 [2024-06-11 13:04:19.073714] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.358 [2024-06-11 13:04:19.073777] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.358 [2024-06-11 13:04:19.074006] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:00.358 [2024-06-11 13:04:19.074080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:00.358 [2024-06-11 13:04:19.074269] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:00.358 [2024-06-11 13:04:19.074327] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.358 13:04:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.616 13:04:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.616 "name": "Existed_Raid", 00:19:00.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.616 "strip_size_kb": 0, 00:19:00.616 "state": "configuring", 00:19:00.616 "raid_level": "raid1", 00:19:00.616 "superblock": false, 00:19:00.616 "num_base_bdevs": 4, 00:19:00.616 "num_base_bdevs_discovered": 0, 00:19:00.616 "num_base_bdevs_operational": 4, 00:19:00.616 "base_bdevs_list": [ 00:19:00.616 { 00:19:00.616 "name": 
"BaseBdev1", 00:19:00.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.616 "is_configured": false, 00:19:00.616 "data_offset": 0, 00:19:00.616 "data_size": 0 00:19:00.616 }, 00:19:00.616 { 00:19:00.616 "name": "BaseBdev2", 00:19:00.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.616 "is_configured": false, 00:19:00.616 "data_offset": 0, 00:19:00.616 "data_size": 0 00:19:00.616 }, 00:19:00.616 { 00:19:00.616 "name": "BaseBdev3", 00:19:00.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.616 "is_configured": false, 00:19:00.616 "data_offset": 0, 00:19:00.616 "data_size": 0 00:19:00.616 }, 00:19:00.616 { 00:19:00.616 "name": "BaseBdev4", 00:19:00.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.616 "is_configured": false, 00:19:00.616 "data_offset": 0, 00:19:00.617 "data_size": 0 00:19:00.617 } 00:19:00.617 ] 00:19:00.617 }' 00:19:00.617 13:04:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.617 13:04:19 -- common/autotest_common.sh@10 -- # set +x 00:19:01.184 13:04:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:01.442 [2024-06-11 13:04:20.173374] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:01.442 [2024-06-11 13:04:20.173553] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:01.442 13:04:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:01.700 [2024-06-11 13:04:20.369439] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:01.700 [2024-06-11 13:04:20.369624] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:01.700 [2024-06-11 13:04:20.369716] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:01.700 [2024-06-11 13:04:20.369864] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:01.700 [2024-06-11 13:04:20.369952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:01.700 [2024-06-11 13:04:20.370019] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:01.700 [2024-06-11 13:04:20.370107] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:01.700 [2024-06-11 13:04:20.370160] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:01.700 13:04:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:01.958 [2024-06-11 13:04:20.596979] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.958 BaseBdev1 00:19:01.958 13:04:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:01.958 13:04:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:01.958 13:04:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:01.958 13:04:20 -- common/autotest_common.sh@889 -- # local i 00:19:01.958 13:04:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:01.958 13:04:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:01.958 13:04:20 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.216 13:04:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:02.216 [ 00:19:02.216 { 00:19:02.216 "name": "BaseBdev1", 00:19:02.216 "aliases": [ 00:19:02.216 "1a8fe96e-4eac-4a52-98b1-b5a57c554eed" 00:19:02.216 ], 00:19:02.216 "product_name": "Malloc disk", 00:19:02.216 "block_size": 512, 00:19:02.216 "num_blocks": 65536, 00:19:02.216 "uuid": "1a8fe96e-4eac-4a52-98b1-b5a57c554eed", 00:19:02.216 "assigned_rate_limits": { 00:19:02.216 "rw_ios_per_sec": 0, 00:19:02.216 "rw_mbytes_per_sec": 0, 00:19:02.216 "r_mbytes_per_sec": 0, 00:19:02.216 "w_mbytes_per_sec": 0 00:19:02.216 }, 00:19:02.216 "claimed": true, 00:19:02.216 "claim_type": "exclusive_write", 00:19:02.216 "zoned": false, 00:19:02.216 "supported_io_types": { 00:19:02.216 "read": true, 00:19:02.216 "write": true, 00:19:02.216 "unmap": true, 00:19:02.216 "write_zeroes": true, 00:19:02.216 "flush": true, 00:19:02.216 "reset": true, 00:19:02.216 "compare": false, 00:19:02.216 "compare_and_write": false, 00:19:02.216 "abort": true, 00:19:02.216 "nvme_admin": false, 00:19:02.216 "nvme_io": false 00:19:02.216 }, 00:19:02.216 "memory_domains": [ 00:19:02.216 { 00:19:02.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.216 "dma_device_type": 2 00:19:02.216 } 00:19:02.216 ], 00:19:02.216 "driver_specific": {} 00:19:02.216 } 00:19:02.216 ] 00:19:02.475 13:04:21 -- common/autotest_common.sh@895 -- # return 0 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.475 "name": "Existed_Raid", 00:19:02.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.475 "strip_size_kb": 0, 00:19:02.475 "state": "configuring", 00:19:02.475 "raid_level": "raid1", 00:19:02.475 "superblock": false, 00:19:02.475 "num_base_bdevs": 4, 00:19:02.475 "num_base_bdevs_discovered": 1, 00:19:02.475 "num_base_bdevs_operational": 4, 00:19:02.475 "base_bdevs_list": [ 00:19:02.475 { 00:19:02.475 "name": "BaseBdev1", 00:19:02.475 "uuid": "1a8fe96e-4eac-4a52-98b1-b5a57c554eed", 00:19:02.475 "is_configured": true, 00:19:02.475 "data_offset": 0, 00:19:02.475 "data_size": 65536 00:19:02.475 }, 00:19:02.475 { 00:19:02.475 "name": "BaseBdev2", 00:19:02.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.475 "is_configured": false, 00:19:02.475 "data_offset": 0, 00:19:02.475 "data_size": 0 00:19:02.475 }, 
00:19:02.475 { 00:19:02.475 "name": "BaseBdev3", 00:19:02.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.475 "is_configured": false, 00:19:02.475 "data_offset": 0, 00:19:02.475 "data_size": 0 00:19:02.475 }, 00:19:02.475 { 00:19:02.475 "name": "BaseBdev4", 00:19:02.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.475 "is_configured": false, 00:19:02.475 "data_offset": 0, 00:19:02.475 "data_size": 0 00:19:02.475 } 00:19:02.475 ] 00:19:02.475 }' 00:19:02.475 13:04:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.475 13:04:21 -- common/autotest_common.sh@10 -- # set +x 00:19:03.409 13:04:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:03.409 [2024-06-11 13:04:22.121407] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:03.409 [2024-06-11 13:04:22.121644] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:03.409 13:04:22 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:03.409 13:04:22 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:03.677 [2024-06-11 13:04:22.317451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.677 [2024-06-11 13:04:22.319103] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:03.677 [2024-06-11 13:04:22.319294] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:03.677 [2024-06-11 13:04:22.319412] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:03.677 [2024-06-11 13:04:22.319468] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:03.677 [2024-06-11 13:04:22.319552] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:03.677 [2024-06-11 13:04:22.319678] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.677 13:04:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.951 13:04:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.951 "name": "Existed_Raid", 00:19:03.951 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:03.951 "strip_size_kb": 0, 00:19:03.951 "state": "configuring", 00:19:03.951 "raid_level": "raid1", 00:19:03.951 "superblock": false, 00:19:03.951 "num_base_bdevs": 4, 00:19:03.951 "num_base_bdevs_discovered": 1, 00:19:03.951 "num_base_bdevs_operational": 4, 00:19:03.951 "base_bdevs_list": [ 00:19:03.951 { 00:19:03.951 "name": "BaseBdev1", 00:19:03.951 "uuid": "1a8fe96e-4eac-4a52-98b1-b5a57c554eed", 00:19:03.951 "is_configured": true, 00:19:03.951 "data_offset": 0, 00:19:03.951 "data_size": 65536 00:19:03.951 }, 00:19:03.951 { 00:19:03.951 "name": "BaseBdev2", 00:19:03.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.951 "is_configured": false, 00:19:03.951 "data_offset": 0, 00:19:03.951 "data_size": 0 00:19:03.951 }, 00:19:03.951 { 00:19:03.951 "name": "BaseBdev3", 00:19:03.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.951 "is_configured": false, 00:19:03.951 "data_offset": 0, 00:19:03.951 "data_size": 0 00:19:03.951 }, 00:19:03.951 { 00:19:03.951 "name": "BaseBdev4", 00:19:03.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.951 "is_configured": false, 00:19:03.951 "data_offset": 0, 00:19:03.951 "data_size": 0 00:19:03.951 } 00:19:03.951 ] 00:19:03.951 }' 00:19:03.951 13:04:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.951 13:04:22 -- common/autotest_common.sh@10 -- # set +x 00:19:04.519 13:04:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:04.778 [2024-06-11 13:04:23.491956] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:04.778 BaseBdev2 00:19:04.778 13:04:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:04.778 13:04:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:04.778 13:04:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:04.778 13:04:23 -- common/autotest_common.sh@889 -- # local i 00:19:04.778 13:04:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:04.778 13:04:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:04.778 13:04:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:05.046 13:04:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:05.306 [ 00:19:05.306 { 00:19:05.306 "name": "BaseBdev2", 00:19:05.306 "aliases": [ 00:19:05.306 "d218227a-4ec9-4276-87e2-b4589a5e90d6" 00:19:05.306 ], 00:19:05.306 "product_name": "Malloc disk", 00:19:05.306 "block_size": 512, 00:19:05.306 "num_blocks": 65536, 00:19:05.306 "uuid": "d218227a-4ec9-4276-87e2-b4589a5e90d6", 00:19:05.306 "assigned_rate_limits": { 00:19:05.306 "rw_ios_per_sec": 0, 00:19:05.306 "rw_mbytes_per_sec": 0, 00:19:05.306 "r_mbytes_per_sec": 0, 00:19:05.306 "w_mbytes_per_sec": 0 00:19:05.306 }, 00:19:05.306 "claimed": true, 00:19:05.306 "claim_type": "exclusive_write", 00:19:05.306 "zoned": false, 00:19:05.306 "supported_io_types": { 00:19:05.306 "read": true, 00:19:05.306 "write": true, 00:19:05.306 "unmap": true, 00:19:05.306 "write_zeroes": true, 00:19:05.306 "flush": true, 00:19:05.306 "reset": true, 00:19:05.306 "compare": false, 00:19:05.306 "compare_and_write": false, 00:19:05.306 "abort": true, 00:19:05.306 "nvme_admin": false, 00:19:05.306 "nvme_io": false 00:19:05.306 }, 00:19:05.306 "memory_domains": [ 00:19:05.306 { 
00:19:05.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.306 "dma_device_type": 2 00:19:05.306 } 00:19:05.306 ], 00:19:05.306 "driver_specific": {} 00:19:05.306 } 00:19:05.306 ] 00:19:05.306 13:04:23 -- common/autotest_common.sh@895 -- # return 0 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.306 13:04:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.565 13:04:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.565 "name": "Existed_Raid", 00:19:05.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.565 "strip_size_kb": 0, 00:19:05.565 "state": "configuring", 00:19:05.565 "raid_level": "raid1", 00:19:05.565 "superblock": false, 00:19:05.565 "num_base_bdevs": 4, 00:19:05.565 "num_base_bdevs_discovered": 2, 00:19:05.565 "num_base_bdevs_operational": 4, 00:19:05.565 "base_bdevs_list": [ 00:19:05.565 { 00:19:05.565 "name": "BaseBdev1", 00:19:05.565 "uuid": "1a8fe96e-4eac-4a52-98b1-b5a57c554eed", 00:19:05.565 "is_configured": true, 00:19:05.565 "data_offset": 0, 00:19:05.565 "data_size": 65536 00:19:05.565 }, 00:19:05.565 { 00:19:05.565 "name": "BaseBdev2", 00:19:05.565 "uuid": "d218227a-4ec9-4276-87e2-b4589a5e90d6", 00:19:05.565 "is_configured": true, 00:19:05.565 "data_offset": 0, 00:19:05.565 "data_size": 65536 00:19:05.565 }, 00:19:05.565 { 00:19:05.565 "name": "BaseBdev3", 00:19:05.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.565 "is_configured": false, 00:19:05.565 "data_offset": 0, 00:19:05.565 "data_size": 0 00:19:05.565 }, 00:19:05.565 { 00:19:05.565 "name": "BaseBdev4", 00:19:05.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.565 "is_configured": false, 00:19:05.565 "data_offset": 0, 00:19:05.565 "data_size": 0 00:19:05.565 } 00:19:05.565 ] 00:19:05.565 }' 00:19:05.565 13:04:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.565 13:04:24 -- common/autotest_common.sh@10 -- # set +x 00:19:06.132 13:04:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:06.391 [2024-06-11 13:04:25.095559] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:06.391 BaseBdev3 00:19:06.391 13:04:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:06.391 13:04:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:06.391 13:04:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:06.391 13:04:25 -- 
common/autotest_common.sh@889 -- # local i 00:19:06.391 13:04:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:06.391 13:04:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:06.391 13:04:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:06.649 13:04:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:06.649 [ 00:19:06.649 { 00:19:06.649 "name": "BaseBdev3", 00:19:06.649 "aliases": [ 00:19:06.649 "b0339b97-5eb5-480e-bd62-258ec2a1fa51" 00:19:06.649 ], 00:19:06.649 "product_name": "Malloc disk", 00:19:06.649 "block_size": 512, 00:19:06.649 "num_blocks": 65536, 00:19:06.649 "uuid": "b0339b97-5eb5-480e-bd62-258ec2a1fa51", 00:19:06.649 "assigned_rate_limits": { 00:19:06.649 "rw_ios_per_sec": 0, 00:19:06.649 "rw_mbytes_per_sec": 0, 00:19:06.649 "r_mbytes_per_sec": 0, 00:19:06.649 "w_mbytes_per_sec": 0 00:19:06.649 }, 00:19:06.649 "claimed": true, 00:19:06.649 "claim_type": "exclusive_write", 00:19:06.649 "zoned": false, 00:19:06.649 "supported_io_types": { 00:19:06.649 "read": true, 00:19:06.649 "write": true, 00:19:06.649 "unmap": true, 00:19:06.649 "write_zeroes": true, 00:19:06.649 "flush": true, 00:19:06.649 "reset": true, 00:19:06.649 "compare": false, 00:19:06.649 "compare_and_write": false, 00:19:06.649 "abort": true, 00:19:06.649 "nvme_admin": false, 00:19:06.649 "nvme_io": false 00:19:06.649 }, 00:19:06.649 "memory_domains": [ 00:19:06.649 { 00:19:06.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.649 "dma_device_type": 2 00:19:06.649 } 00:19:06.649 ], 00:19:06.649 "driver_specific": {} 00:19:06.649 } 00:19:06.649 ] 00:19:06.649 13:04:25 -- common/autotest_common.sh@895 -- # return 0 00:19:06.649 13:04:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:06.649 13:04:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:06.649 13:04:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:06.649 13:04:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:06.649 13:04:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:06.649 13:04:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:06.649 13:04:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:06.649 13:04:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:06.649 13:04:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:06.649 13:04:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:06.650 13:04:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:06.650 13:04:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:06.650 13:04:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.650 13:04:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.907 13:04:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.907 "name": "Existed_Raid", 00:19:06.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.907 "strip_size_kb": 0, 00:19:06.907 "state": "configuring", 00:19:06.907 "raid_level": "raid1", 00:19:06.907 "superblock": false, 00:19:06.907 "num_base_bdevs": 4, 00:19:06.907 "num_base_bdevs_discovered": 3, 00:19:06.907 "num_base_bdevs_operational": 4, 00:19:06.907 "base_bdevs_list": [ 00:19:06.907 { 00:19:06.907 "name": "BaseBdev1", 
00:19:06.907 "uuid": "1a8fe96e-4eac-4a52-98b1-b5a57c554eed", 00:19:06.907 "is_configured": true, 00:19:06.907 "data_offset": 0, 00:19:06.907 "data_size": 65536 00:19:06.907 }, 00:19:06.907 { 00:19:06.907 "name": "BaseBdev2", 00:19:06.907 "uuid": "d218227a-4ec9-4276-87e2-b4589a5e90d6", 00:19:06.907 "is_configured": true, 00:19:06.907 "data_offset": 0, 00:19:06.907 "data_size": 65536 00:19:06.907 }, 00:19:06.907 { 00:19:06.907 "name": "BaseBdev3", 00:19:06.907 "uuid": "b0339b97-5eb5-480e-bd62-258ec2a1fa51", 00:19:06.907 "is_configured": true, 00:19:06.907 "data_offset": 0, 00:19:06.907 "data_size": 65536 00:19:06.907 }, 00:19:06.907 { 00:19:06.907 "name": "BaseBdev4", 00:19:06.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.907 "is_configured": false, 00:19:06.907 "data_offset": 0, 00:19:06.907 "data_size": 0 00:19:06.907 } 00:19:06.907 ] 00:19:06.907 }' 00:19:06.907 13:04:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.907 13:04:25 -- common/autotest_common.sh@10 -- # set +x 00:19:07.474 13:04:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:07.733 [2024-06-11 13:04:26.526046] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:07.733 [2024-06-11 13:04:26.526284] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:07.733 [2024-06-11 13:04:26.526326] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:07.733 [2024-06-11 13:04:26.526570] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:07.733 [2024-06-11 13:04:26.527064] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:07.733 [2024-06-11 13:04:26.527215] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:07.733 [2024-06-11 13:04:26.527567] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.733 BaseBdev4 00:19:07.733 13:04:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:07.733 13:04:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:07.733 13:04:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:07.733 13:04:26 -- common/autotest_common.sh@889 -- # local i 00:19:07.733 13:04:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:07.733 13:04:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:07.733 13:04:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:07.991 13:04:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:08.250 [ 00:19:08.250 { 00:19:08.250 "name": "BaseBdev4", 00:19:08.250 "aliases": [ 00:19:08.250 "cb213b32-fc42-49f8-af8c-0ce8b1d11b47" 00:19:08.250 ], 00:19:08.250 "product_name": "Malloc disk", 00:19:08.250 "block_size": 512, 00:19:08.250 "num_blocks": 65536, 00:19:08.250 "uuid": "cb213b32-fc42-49f8-af8c-0ce8b1d11b47", 00:19:08.250 "assigned_rate_limits": { 00:19:08.250 "rw_ios_per_sec": 0, 00:19:08.250 "rw_mbytes_per_sec": 0, 00:19:08.250 "r_mbytes_per_sec": 0, 00:19:08.250 "w_mbytes_per_sec": 0 00:19:08.250 }, 00:19:08.250 "claimed": true, 00:19:08.250 "claim_type": "exclusive_write", 00:19:08.250 "zoned": false, 00:19:08.250 "supported_io_types": { 
00:19:08.250 "read": true, 00:19:08.250 "write": true, 00:19:08.250 "unmap": true, 00:19:08.250 "write_zeroes": true, 00:19:08.250 "flush": true, 00:19:08.250 "reset": true, 00:19:08.250 "compare": false, 00:19:08.250 "compare_and_write": false, 00:19:08.250 "abort": true, 00:19:08.250 "nvme_admin": false, 00:19:08.250 "nvme_io": false 00:19:08.250 }, 00:19:08.250 "memory_domains": [ 00:19:08.250 { 00:19:08.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.250 "dma_device_type": 2 00:19:08.250 } 00:19:08.250 ], 00:19:08.250 "driver_specific": {} 00:19:08.250 } 00:19:08.250 ] 00:19:08.250 13:04:27 -- common/autotest_common.sh@895 -- # return 0 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.250 13:04:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.508 13:04:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.509 "name": "Existed_Raid", 00:19:08.509 "uuid": "c047a7d4-2e7f-4a24-80e2-fcaebc686c46", 00:19:08.509 "strip_size_kb": 0, 00:19:08.509 "state": "online", 00:19:08.509 "raid_level": "raid1", 00:19:08.509 "superblock": false, 00:19:08.509 "num_base_bdevs": 4, 00:19:08.509 "num_base_bdevs_discovered": 4, 00:19:08.509 "num_base_bdevs_operational": 4, 00:19:08.509 "base_bdevs_list": [ 00:19:08.509 { 00:19:08.509 "name": "BaseBdev1", 00:19:08.509 "uuid": "1a8fe96e-4eac-4a52-98b1-b5a57c554eed", 00:19:08.509 "is_configured": true, 00:19:08.509 "data_offset": 0, 00:19:08.509 "data_size": 65536 00:19:08.509 }, 00:19:08.509 { 00:19:08.509 "name": "BaseBdev2", 00:19:08.509 "uuid": "d218227a-4ec9-4276-87e2-b4589a5e90d6", 00:19:08.509 "is_configured": true, 00:19:08.509 "data_offset": 0, 00:19:08.509 "data_size": 65536 00:19:08.509 }, 00:19:08.509 { 00:19:08.509 "name": "BaseBdev3", 00:19:08.509 "uuid": "b0339b97-5eb5-480e-bd62-258ec2a1fa51", 00:19:08.509 "is_configured": true, 00:19:08.509 "data_offset": 0, 00:19:08.509 "data_size": 65536 00:19:08.509 }, 00:19:08.509 { 00:19:08.509 "name": "BaseBdev4", 00:19:08.509 "uuid": "cb213b32-fc42-49f8-af8c-0ce8b1d11b47", 00:19:08.509 "is_configured": true, 00:19:08.509 "data_offset": 0, 00:19:08.509 "data_size": 65536 00:19:08.509 } 00:19:08.509 ] 00:19:08.509 }' 00:19:08.509 13:04:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.509 13:04:27 -- common/autotest_common.sh@10 -- # set +x 00:19:09.076 13:04:27 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:09.335 [2024-06-11 13:04:28.066280] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.335 13:04:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.593 13:04:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:09.593 "name": "Existed_Raid", 00:19:09.593 "uuid": "c047a7d4-2e7f-4a24-80e2-fcaebc686c46", 00:19:09.593 "strip_size_kb": 0, 00:19:09.593 "state": "online", 00:19:09.593 "raid_level": "raid1", 00:19:09.593 "superblock": false, 00:19:09.593 "num_base_bdevs": 4, 00:19:09.594 "num_base_bdevs_discovered": 3, 00:19:09.594 "num_base_bdevs_operational": 3, 00:19:09.594 "base_bdevs_list": [ 00:19:09.594 { 00:19:09.594 "name": null, 00:19:09.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.594 "is_configured": false, 00:19:09.594 "data_offset": 0, 00:19:09.594 "data_size": 65536 00:19:09.594 }, 00:19:09.594 { 00:19:09.594 "name": "BaseBdev2", 00:19:09.594 "uuid": "d218227a-4ec9-4276-87e2-b4589a5e90d6", 00:19:09.594 "is_configured": true, 00:19:09.594 "data_offset": 0, 00:19:09.594 "data_size": 65536 00:19:09.594 }, 00:19:09.594 { 00:19:09.594 "name": "BaseBdev3", 00:19:09.594 "uuid": "b0339b97-5eb5-480e-bd62-258ec2a1fa51", 00:19:09.594 "is_configured": true, 00:19:09.594 "data_offset": 0, 00:19:09.594 "data_size": 65536 00:19:09.594 }, 00:19:09.594 { 00:19:09.594 "name": "BaseBdev4", 00:19:09.594 "uuid": "cb213b32-fc42-49f8-af8c-0ce8b1d11b47", 00:19:09.594 "is_configured": true, 00:19:09.594 "data_offset": 0, 00:19:09.594 "data_size": 65536 00:19:09.594 } 00:19:09.594 ] 00:19:09.594 }' 00:19:09.594 13:04:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:09.594 13:04:28 -- common/autotest_common.sh@10 -- # set +x 00:19:10.161 13:04:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:10.161 13:04:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:10.161 13:04:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.161 13:04:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:10.419 13:04:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:10.419 13:04:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:10.419 13:04:29 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:10.678 [2024-06-11 13:04:29.462290] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:10.936 13:04:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:10.936 13:04:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:10.936 13:04:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.936 13:04:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:11.194 13:04:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:11.194 13:04:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:11.194 13:04:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:11.194 [2024-06-11 13:04:29.989780] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:11.453 13:04:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:11.453 13:04:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:11.453 13:04:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.453 13:04:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:11.711 13:04:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:11.711 13:04:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:11.712 13:04:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:11.712 [2024-06-11 13:04:30.539803] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:11.712 [2024-06-11 13:04:30.539973] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:11.712 [2024-06-11 13:04:30.540162] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.970 [2024-06-11 13:04:30.610621] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:11.970 [2024-06-11 13:04:30.610781] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:11.970 13:04:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:11.970 13:04:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:11.970 13:04:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.970 13:04:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:12.229 13:04:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:12.229 13:04:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:12.229 13:04:30 -- bdev/bdev_raid.sh@287 -- # killprocess 123995 00:19:12.229 13:04:30 -- common/autotest_common.sh@926 -- # '[' -z 123995 ']' 00:19:12.229 13:04:30 -- common/autotest_common.sh@930 -- # kill -0 123995 00:19:12.229 13:04:30 -- common/autotest_common.sh@931 -- # uname 00:19:12.229 13:04:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:12.229 13:04:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123995 00:19:12.229 killing process with pid 123995 00:19:12.230 13:04:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:12.230 13:04:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:12.230 13:04:30 -- 
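
Deleting BaseBdev2, BaseBdev3 and BaseBdev4 in turn drains the array: once no base bdevs remain, the raid moves from online to offline and is freed in destruct, and the test shuts down the bdev_svc app it started. The killprocess helper being traced here boils down to roughly the following shell steps (a sketch, not the literal helper source):

  kill -0 "$pid"                    # confirm the pid is still alive
  ps --no-headers -o comm= "$pid"   # resolve the command name; the trace checks it is not 'sudo'
  kill "$pid"                       # terminate the bdev_svc app (pid 123995 in this run)
  wait "$pid"                       # reap it so the next test starts from a clean state
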
common/autotest_common.sh@944 -- # echo 'killing process with pid 123995' 00:19:12.230 13:04:30 -- common/autotest_common.sh@945 -- # kill 123995 00:19:12.230 13:04:30 -- common/autotest_common.sh@950 -- # wait 123995 00:19:12.230 [2024-06-11 13:04:30.887977] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:12.230 [2024-06-11 13:04:30.888089] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:13.166 ************************************ 00:19:13.166 END TEST raid_state_function_test 00:19:13.166 ************************************ 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:13.166 00:19:13.166 real 0m13.950s 00:19:13.166 user 0m24.976s 00:19:13.166 sys 0m1.633s 00:19:13.166 13:04:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.166 13:04:31 -- common/autotest_common.sh@10 -- # set +x 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:13.166 13:04:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:13.166 13:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:13.166 13:04:31 -- common/autotest_common.sh@10 -- # set +x 00:19:13.166 ************************************ 00:19:13.166 START TEST raid_state_function_test_sb 00:19:13.166 ************************************ 00:19:13.166 13:04:31 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:13.166 
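
The test above ran without superblocks; the _sb variant starting here repeats the same state-machine walk with superblock=true, which only changes the create call: superblock_create_arg becomes -s, so bdev_raid_create writes RAID superblock metadata onto each base bdev. That is visible later in the dumps as data_offset 2048 and data_size 63488 (instead of 0 and 65536) for every configured base bdev, the leading blocks presumably being reserved for the superblock. The create call used throughout this variant is:

  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
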
13:04:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=124448 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124448' 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:13.166 Process raid pid: 124448 00:19:13.166 13:04:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124448 /var/tmp/spdk-raid.sock 00:19:13.166 13:04:31 -- common/autotest_common.sh@819 -- # '[' -z 124448 ']' 00:19:13.166 13:04:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:13.166 13:04:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:13.166 13:04:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:13.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:13.166 13:04:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:13.166 13:04:31 -- common/autotest_common.sh@10 -- # set +x 00:19:13.166 [2024-06-11 13:04:31.977868] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:13.166 [2024-06-11 13:04:31.978242] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.425 [2024-06-11 13:04:32.149343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.683 [2024-06-11 13:04:32.360837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.941 [2024-06-11 13:04:32.535270] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.202 13:04:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:14.202 13:04:32 -- common/autotest_common.sh@852 -- # return 0 00:19:14.202 13:04:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:14.463 [2024-06-11 13:04:33.123757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.463 [2024-06-11 13:04:33.123960] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.463 [2024-06-11 13:04:33.124071] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.463 [2024-06-11 13:04:33.124129] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.463 [2024-06-11 13:04:33.124230] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:14.463 [2024-06-11 13:04:33.124299] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:14.463 [2024-06-11 13:04:33.124402] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:14.463 [2024-06-11 13:04:33.124458] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.463 13:04:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.721 13:04:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.721 "name": "Existed_Raid", 00:19:14.721 "uuid": "0087f5fd-0b3a-4c71-8fdf-617b45205164", 00:19:14.721 "strip_size_kb": 0, 00:19:14.721 "state": "configuring", 00:19:14.721 "raid_level": "raid1", 00:19:14.721 "superblock": true, 00:19:14.721 "num_base_bdevs": 4, 00:19:14.721 "num_base_bdevs_discovered": 0, 00:19:14.721 "num_base_bdevs_operational": 4, 00:19:14.721 "base_bdevs_list": [ 00:19:14.721 { 00:19:14.721 "name": "BaseBdev1", 00:19:14.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.721 "is_configured": false, 00:19:14.721 "data_offset": 0, 00:19:14.721 "data_size": 0 00:19:14.721 }, 00:19:14.721 { 00:19:14.721 "name": "BaseBdev2", 00:19:14.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.721 "is_configured": false, 00:19:14.721 "data_offset": 0, 00:19:14.721 "data_size": 0 00:19:14.721 }, 00:19:14.721 { 00:19:14.721 "name": "BaseBdev3", 00:19:14.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.721 "is_configured": false, 00:19:14.721 "data_offset": 0, 00:19:14.722 "data_size": 0 00:19:14.722 }, 00:19:14.722 { 00:19:14.722 "name": "BaseBdev4", 00:19:14.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.722 "is_configured": false, 00:19:14.722 "data_offset": 0, 00:19:14.722 "data_size": 0 00:19:14.722 } 00:19:14.722 ] 00:19:14.722 }' 00:19:14.722 13:04:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.722 13:04:33 -- common/autotest_common.sh@10 -- # set +x 00:19:15.288 13:04:34 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:15.546 [2024-06-11 13:04:34.375834] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:15.546 [2024-06-11 13:04:34.376017] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:15.805 13:04:34 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:15.805 [2024-06-11 13:04:34.571916] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:15.805 [2024-06-11 13:04:34.572151] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:15.806 [2024-06-11 13:04:34.572251] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:15.806 [2024-06-11 13:04:34.572316] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:15.806 [2024-06-11 13:04:34.572480] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:15.806 [2024-06-11 
13:04:34.572553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:15.806 [2024-06-11 13:04:34.572642] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:15.806 [2024-06-11 13:04:34.572699] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:15.806 13:04:34 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:16.064 [2024-06-11 13:04:34.792280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.064 BaseBdev1 00:19:16.064 13:04:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:16.064 13:04:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:16.064 13:04:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:16.064 13:04:34 -- common/autotest_common.sh@889 -- # local i 00:19:16.064 13:04:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:16.064 13:04:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:16.064 13:04:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:16.322 13:04:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:16.581 [ 00:19:16.581 { 00:19:16.581 "name": "BaseBdev1", 00:19:16.581 "aliases": [ 00:19:16.581 "4f262d7c-7722-46a5-85d3-44434c499de4" 00:19:16.581 ], 00:19:16.581 "product_name": "Malloc disk", 00:19:16.581 "block_size": 512, 00:19:16.581 "num_blocks": 65536, 00:19:16.581 "uuid": "4f262d7c-7722-46a5-85d3-44434c499de4", 00:19:16.581 "assigned_rate_limits": { 00:19:16.581 "rw_ios_per_sec": 0, 00:19:16.581 "rw_mbytes_per_sec": 0, 00:19:16.581 "r_mbytes_per_sec": 0, 00:19:16.581 "w_mbytes_per_sec": 0 00:19:16.581 }, 00:19:16.581 "claimed": true, 00:19:16.581 "claim_type": "exclusive_write", 00:19:16.581 "zoned": false, 00:19:16.581 "supported_io_types": { 00:19:16.581 "read": true, 00:19:16.581 "write": true, 00:19:16.581 "unmap": true, 00:19:16.581 "write_zeroes": true, 00:19:16.581 "flush": true, 00:19:16.581 "reset": true, 00:19:16.581 "compare": false, 00:19:16.581 "compare_and_write": false, 00:19:16.581 "abort": true, 00:19:16.581 "nvme_admin": false, 00:19:16.581 "nvme_io": false 00:19:16.581 }, 00:19:16.581 "memory_domains": [ 00:19:16.581 { 00:19:16.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.581 "dma_device_type": 2 00:19:16.581 } 00:19:16.581 ], 00:19:16.581 "driver_specific": {} 00:19:16.581 } 00:19:16.581 ] 00:19:16.581 13:04:35 -- common/autotest_common.sh@895 -- # return 0 00:19:16.581 13:04:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:16.581 13:04:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:16.581 13:04:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:16.581 13:04:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:16.581 13:04:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:16.581 13:04:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:16.581 13:04:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:16.581 13:04:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:16.581 13:04:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:16.581 13:04:35 -- 
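
As in the non-superblock run, every base bdev is attached with the same three-step cycle: create a 32 MB malloc bdev with 512-byte blocks (the 65536 num_blocks reported in the dumps), wait for the examine pass to finish (the raid module claims the new bdev as soon as it is created), then poll for the bdev before re-verifying the raid state; waitforbdev defaults bdev_timeout to 2000, which is the -t 2000 on every bdev_get_bdevs call. For the next base bdev the equivalent manual sequence, against the same socket, is roughly:

  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
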
bdev/bdev_raid.sh@125 -- # local tmp 00:19:16.581 13:04:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.581 13:04:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.840 13:04:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.840 "name": "Existed_Raid", 00:19:16.840 "uuid": "2db2ad25-2d28-457f-9f7b-6029da8690cf", 00:19:16.840 "strip_size_kb": 0, 00:19:16.840 "state": "configuring", 00:19:16.840 "raid_level": "raid1", 00:19:16.840 "superblock": true, 00:19:16.840 "num_base_bdevs": 4, 00:19:16.840 "num_base_bdevs_discovered": 1, 00:19:16.840 "num_base_bdevs_operational": 4, 00:19:16.840 "base_bdevs_list": [ 00:19:16.840 { 00:19:16.840 "name": "BaseBdev1", 00:19:16.840 "uuid": "4f262d7c-7722-46a5-85d3-44434c499de4", 00:19:16.840 "is_configured": true, 00:19:16.840 "data_offset": 2048, 00:19:16.840 "data_size": 63488 00:19:16.840 }, 00:19:16.840 { 00:19:16.840 "name": "BaseBdev2", 00:19:16.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.840 "is_configured": false, 00:19:16.840 "data_offset": 0, 00:19:16.840 "data_size": 0 00:19:16.840 }, 00:19:16.840 { 00:19:16.840 "name": "BaseBdev3", 00:19:16.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.840 "is_configured": false, 00:19:16.840 "data_offset": 0, 00:19:16.840 "data_size": 0 00:19:16.840 }, 00:19:16.840 { 00:19:16.840 "name": "BaseBdev4", 00:19:16.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.840 "is_configured": false, 00:19:16.840 "data_offset": 0, 00:19:16.840 "data_size": 0 00:19:16.840 } 00:19:16.840 ] 00:19:16.840 }' 00:19:16.840 13:04:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.840 13:04:35 -- common/autotest_common.sh@10 -- # set +x 00:19:17.407 13:04:36 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:17.665 [2024-06-11 13:04:36.372678] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:17.665 [2024-06-11 13:04:36.372867] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:17.665 13:04:36 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:17.666 13:04:36 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:17.925 13:04:36 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:18.183 BaseBdev1 00:19:18.183 13:04:36 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:18.183 13:04:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:18.183 13:04:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:18.183 13:04:36 -- common/autotest_common.sh@889 -- # local i 00:19:18.183 13:04:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:18.183 13:04:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:18.183 13:04:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:18.442 13:04:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:18.701 [ 00:19:18.701 { 00:19:18.701 "name": "BaseBdev1", 00:19:18.701 "aliases": [ 00:19:18.701 
"6c3d51bc-8501-4483-9bf4-3d237382fc25" 00:19:18.701 ], 00:19:18.701 "product_name": "Malloc disk", 00:19:18.701 "block_size": 512, 00:19:18.701 "num_blocks": 65536, 00:19:18.701 "uuid": "6c3d51bc-8501-4483-9bf4-3d237382fc25", 00:19:18.701 "assigned_rate_limits": { 00:19:18.701 "rw_ios_per_sec": 0, 00:19:18.701 "rw_mbytes_per_sec": 0, 00:19:18.701 "r_mbytes_per_sec": 0, 00:19:18.701 "w_mbytes_per_sec": 0 00:19:18.701 }, 00:19:18.701 "claimed": false, 00:19:18.701 "zoned": false, 00:19:18.701 "supported_io_types": { 00:19:18.701 "read": true, 00:19:18.701 "write": true, 00:19:18.701 "unmap": true, 00:19:18.701 "write_zeroes": true, 00:19:18.701 "flush": true, 00:19:18.701 "reset": true, 00:19:18.701 "compare": false, 00:19:18.701 "compare_and_write": false, 00:19:18.701 "abort": true, 00:19:18.702 "nvme_admin": false, 00:19:18.702 "nvme_io": false 00:19:18.702 }, 00:19:18.702 "memory_domains": [ 00:19:18.702 { 00:19:18.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.702 "dma_device_type": 2 00:19:18.702 } 00:19:18.702 ], 00:19:18.702 "driver_specific": {} 00:19:18.702 } 00:19:18.702 ] 00:19:18.702 13:04:37 -- common/autotest_common.sh@895 -- # return 0 00:19:18.702 13:04:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:18.960 [2024-06-11 13:04:37.558466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.960 [2024-06-11 13:04:37.560572] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:18.960 [2024-06-11 13:04:37.560821] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:18.960 [2024-06-11 13:04:37.560928] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:18.960 [2024-06-11 13:04:37.560990] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:18.960 [2024-06-11 13:04:37.561095] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:18.960 [2024-06-11 13:04:37.561165] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:18.960 13:04:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:18.960 13:04:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:18.960 13:04:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:18.960 13:04:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:18.960 13:04:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:18.960 13:04:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:18.960 13:04:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:18.960 13:04:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:18.960 13:04:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:18.960 13:04:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:18.960 13:04:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:18.961 13:04:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:18.961 13:04:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.961 13:04:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.961 13:04:37 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:19:18.961 "name": "Existed_Raid", 00:19:18.961 "uuid": "514bb1e2-038a-45f5-9e25-9ae4c731cbd5", 00:19:18.961 "strip_size_kb": 0, 00:19:18.961 "state": "configuring", 00:19:18.961 "raid_level": "raid1", 00:19:18.961 "superblock": true, 00:19:18.961 "num_base_bdevs": 4, 00:19:18.961 "num_base_bdevs_discovered": 1, 00:19:18.961 "num_base_bdevs_operational": 4, 00:19:18.961 "base_bdevs_list": [ 00:19:18.961 { 00:19:18.961 "name": "BaseBdev1", 00:19:18.961 "uuid": "6c3d51bc-8501-4483-9bf4-3d237382fc25", 00:19:18.961 "is_configured": true, 00:19:18.961 "data_offset": 2048, 00:19:18.961 "data_size": 63488 00:19:18.961 }, 00:19:18.961 { 00:19:18.961 "name": "BaseBdev2", 00:19:18.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.961 "is_configured": false, 00:19:18.961 "data_offset": 0, 00:19:18.961 "data_size": 0 00:19:18.961 }, 00:19:18.961 { 00:19:18.961 "name": "BaseBdev3", 00:19:18.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.961 "is_configured": false, 00:19:18.961 "data_offset": 0, 00:19:18.961 "data_size": 0 00:19:18.961 }, 00:19:18.961 { 00:19:18.961 "name": "BaseBdev4", 00:19:18.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.961 "is_configured": false, 00:19:18.961 "data_offset": 0, 00:19:18.961 "data_size": 0 00:19:18.961 } 00:19:18.961 ] 00:19:18.961 }' 00:19:18.961 13:04:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:18.961 13:04:37 -- common/autotest_common.sh@10 -- # set +x 00:19:19.895 13:04:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:20.154 [2024-06-11 13:04:38.768871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.154 BaseBdev2 00:19:20.154 13:04:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:20.154 13:04:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:20.154 13:04:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:20.154 13:04:38 -- common/autotest_common.sh@889 -- # local i 00:19:20.154 13:04:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:20.154 13:04:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:20.154 13:04:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:20.154 13:04:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:20.412 [ 00:19:20.412 { 00:19:20.412 "name": "BaseBdev2", 00:19:20.412 "aliases": [ 00:19:20.412 "d264349f-57f7-47b7-8f2d-d9537a667655" 00:19:20.412 ], 00:19:20.412 "product_name": "Malloc disk", 00:19:20.412 "block_size": 512, 00:19:20.412 "num_blocks": 65536, 00:19:20.412 "uuid": "d264349f-57f7-47b7-8f2d-d9537a667655", 00:19:20.412 "assigned_rate_limits": { 00:19:20.412 "rw_ios_per_sec": 0, 00:19:20.412 "rw_mbytes_per_sec": 0, 00:19:20.412 "r_mbytes_per_sec": 0, 00:19:20.412 "w_mbytes_per_sec": 0 00:19:20.412 }, 00:19:20.412 "claimed": true, 00:19:20.412 "claim_type": "exclusive_write", 00:19:20.412 "zoned": false, 00:19:20.412 "supported_io_types": { 00:19:20.412 "read": true, 00:19:20.412 "write": true, 00:19:20.412 "unmap": true, 00:19:20.412 "write_zeroes": true, 00:19:20.412 "flush": true, 00:19:20.412 "reset": true, 00:19:20.412 "compare": false, 00:19:20.412 "compare_and_write": false, 00:19:20.412 "abort": true, 00:19:20.412 "nvme_admin": false, 00:19:20.412 
"nvme_io": false 00:19:20.412 }, 00:19:20.412 "memory_domains": [ 00:19:20.412 { 00:19:20.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.412 "dma_device_type": 2 00:19:20.412 } 00:19:20.412 ], 00:19:20.412 "driver_specific": {} 00:19:20.412 } 00:19:20.412 ] 00:19:20.412 13:04:39 -- common/autotest_common.sh@895 -- # return 0 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.412 13:04:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.670 13:04:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:20.670 "name": "Existed_Raid", 00:19:20.670 "uuid": "514bb1e2-038a-45f5-9e25-9ae4c731cbd5", 00:19:20.670 "strip_size_kb": 0, 00:19:20.670 "state": "configuring", 00:19:20.670 "raid_level": "raid1", 00:19:20.670 "superblock": true, 00:19:20.670 "num_base_bdevs": 4, 00:19:20.670 "num_base_bdevs_discovered": 2, 00:19:20.670 "num_base_bdevs_operational": 4, 00:19:20.670 "base_bdevs_list": [ 00:19:20.670 { 00:19:20.670 "name": "BaseBdev1", 00:19:20.670 "uuid": "6c3d51bc-8501-4483-9bf4-3d237382fc25", 00:19:20.670 "is_configured": true, 00:19:20.670 "data_offset": 2048, 00:19:20.670 "data_size": 63488 00:19:20.670 }, 00:19:20.670 { 00:19:20.670 "name": "BaseBdev2", 00:19:20.670 "uuid": "d264349f-57f7-47b7-8f2d-d9537a667655", 00:19:20.670 "is_configured": true, 00:19:20.670 "data_offset": 2048, 00:19:20.670 "data_size": 63488 00:19:20.670 }, 00:19:20.670 { 00:19:20.670 "name": "BaseBdev3", 00:19:20.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.670 "is_configured": false, 00:19:20.670 "data_offset": 0, 00:19:20.670 "data_size": 0 00:19:20.670 }, 00:19:20.670 { 00:19:20.670 "name": "BaseBdev4", 00:19:20.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.670 "is_configured": false, 00:19:20.670 "data_offset": 0, 00:19:20.670 "data_size": 0 00:19:20.670 } 00:19:20.670 ] 00:19:20.670 }' 00:19:20.670 13:04:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:20.670 13:04:39 -- common/autotest_common.sh@10 -- # set +x 00:19:21.236 13:04:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:21.495 [2024-06-11 13:04:40.277139] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:21.495 BaseBdev3 00:19:21.495 13:04:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:21.495 13:04:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:21.495 13:04:40 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:21.495 13:04:40 -- common/autotest_common.sh@889 -- # local i 00:19:21.495 13:04:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:21.495 13:04:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:21.495 13:04:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:21.753 13:04:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:22.011 [ 00:19:22.011 { 00:19:22.011 "name": "BaseBdev3", 00:19:22.011 "aliases": [ 00:19:22.011 "aaa9ce6d-924c-43b1-ae21-f6eed30a84af" 00:19:22.011 ], 00:19:22.011 "product_name": "Malloc disk", 00:19:22.011 "block_size": 512, 00:19:22.011 "num_blocks": 65536, 00:19:22.011 "uuid": "aaa9ce6d-924c-43b1-ae21-f6eed30a84af", 00:19:22.011 "assigned_rate_limits": { 00:19:22.011 "rw_ios_per_sec": 0, 00:19:22.011 "rw_mbytes_per_sec": 0, 00:19:22.011 "r_mbytes_per_sec": 0, 00:19:22.011 "w_mbytes_per_sec": 0 00:19:22.011 }, 00:19:22.011 "claimed": true, 00:19:22.011 "claim_type": "exclusive_write", 00:19:22.011 "zoned": false, 00:19:22.011 "supported_io_types": { 00:19:22.011 "read": true, 00:19:22.011 "write": true, 00:19:22.011 "unmap": true, 00:19:22.011 "write_zeroes": true, 00:19:22.011 "flush": true, 00:19:22.011 "reset": true, 00:19:22.011 "compare": false, 00:19:22.011 "compare_and_write": false, 00:19:22.011 "abort": true, 00:19:22.012 "nvme_admin": false, 00:19:22.012 "nvme_io": false 00:19:22.012 }, 00:19:22.012 "memory_domains": [ 00:19:22.012 { 00:19:22.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.012 "dma_device_type": 2 00:19:22.012 } 00:19:22.012 ], 00:19:22.012 "driver_specific": {} 00:19:22.012 } 00:19:22.012 ] 00:19:22.012 13:04:40 -- common/autotest_common.sh@895 -- # return 0 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.012 13:04:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.270 13:04:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:22.270 "name": "Existed_Raid", 00:19:22.270 "uuid": "514bb1e2-038a-45f5-9e25-9ae4c731cbd5", 00:19:22.270 "strip_size_kb": 0, 00:19:22.270 "state": "configuring", 00:19:22.270 "raid_level": "raid1", 00:19:22.270 "superblock": true, 00:19:22.270 "num_base_bdevs": 4, 00:19:22.270 "num_base_bdevs_discovered": 3, 00:19:22.270 "num_base_bdevs_operational": 4, 00:19:22.270 
"base_bdevs_list": [ 00:19:22.270 { 00:19:22.270 "name": "BaseBdev1", 00:19:22.270 "uuid": "6c3d51bc-8501-4483-9bf4-3d237382fc25", 00:19:22.270 "is_configured": true, 00:19:22.270 "data_offset": 2048, 00:19:22.270 "data_size": 63488 00:19:22.270 }, 00:19:22.270 { 00:19:22.270 "name": "BaseBdev2", 00:19:22.270 "uuid": "d264349f-57f7-47b7-8f2d-d9537a667655", 00:19:22.270 "is_configured": true, 00:19:22.270 "data_offset": 2048, 00:19:22.270 "data_size": 63488 00:19:22.270 }, 00:19:22.270 { 00:19:22.270 "name": "BaseBdev3", 00:19:22.270 "uuid": "aaa9ce6d-924c-43b1-ae21-f6eed30a84af", 00:19:22.270 "is_configured": true, 00:19:22.270 "data_offset": 2048, 00:19:22.270 "data_size": 63488 00:19:22.270 }, 00:19:22.270 { 00:19:22.270 "name": "BaseBdev4", 00:19:22.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.270 "is_configured": false, 00:19:22.270 "data_offset": 0, 00:19:22.270 "data_size": 0 00:19:22.270 } 00:19:22.270 ] 00:19:22.270 }' 00:19:22.270 13:04:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:22.270 13:04:40 -- common/autotest_common.sh@10 -- # set +x 00:19:22.836 13:04:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:23.094 [2024-06-11 13:04:41.813992] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:23.094 [2024-06-11 13:04:41.814491] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:23.095 [2024-06-11 13:04:41.814639] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:23.095 BaseBdev4 00:19:23.095 [2024-06-11 13:04:41.814813] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:23.095 [2024-06-11 13:04:41.815296] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:23.095 [2024-06-11 13:04:41.815423] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:23.095 [2024-06-11 13:04:41.815669] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.095 13:04:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:23.095 13:04:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:23.095 13:04:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:23.095 13:04:41 -- common/autotest_common.sh@889 -- # local i 00:19:23.095 13:04:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:23.095 13:04:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:23.095 13:04:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:23.353 13:04:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:23.611 [ 00:19:23.611 { 00:19:23.611 "name": "BaseBdev4", 00:19:23.611 "aliases": [ 00:19:23.611 "8fd2ca80-eb9d-4c43-92ee-20332919e0c2" 00:19:23.611 ], 00:19:23.611 "product_name": "Malloc disk", 00:19:23.611 "block_size": 512, 00:19:23.611 "num_blocks": 65536, 00:19:23.611 "uuid": "8fd2ca80-eb9d-4c43-92ee-20332919e0c2", 00:19:23.611 "assigned_rate_limits": { 00:19:23.611 "rw_ios_per_sec": 0, 00:19:23.611 "rw_mbytes_per_sec": 0, 00:19:23.611 "r_mbytes_per_sec": 0, 00:19:23.611 "w_mbytes_per_sec": 0 00:19:23.611 }, 00:19:23.611 "claimed": true, 00:19:23.611 "claim_type": 
"exclusive_write", 00:19:23.611 "zoned": false, 00:19:23.611 "supported_io_types": { 00:19:23.611 "read": true, 00:19:23.611 "write": true, 00:19:23.611 "unmap": true, 00:19:23.611 "write_zeroes": true, 00:19:23.611 "flush": true, 00:19:23.611 "reset": true, 00:19:23.611 "compare": false, 00:19:23.611 "compare_and_write": false, 00:19:23.611 "abort": true, 00:19:23.611 "nvme_admin": false, 00:19:23.611 "nvme_io": false 00:19:23.611 }, 00:19:23.611 "memory_domains": [ 00:19:23.611 { 00:19:23.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.611 "dma_device_type": 2 00:19:23.611 } 00:19:23.611 ], 00:19:23.611 "driver_specific": {} 00:19:23.611 } 00:19:23.611 ] 00:19:23.611 13:04:42 -- common/autotest_common.sh@895 -- # return 0 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.611 13:04:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.869 13:04:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:23.869 "name": "Existed_Raid", 00:19:23.869 "uuid": "514bb1e2-038a-45f5-9e25-9ae4c731cbd5", 00:19:23.869 "strip_size_kb": 0, 00:19:23.869 "state": "online", 00:19:23.870 "raid_level": "raid1", 00:19:23.870 "superblock": true, 00:19:23.870 "num_base_bdevs": 4, 00:19:23.870 "num_base_bdevs_discovered": 4, 00:19:23.870 "num_base_bdevs_operational": 4, 00:19:23.870 "base_bdevs_list": [ 00:19:23.870 { 00:19:23.870 "name": "BaseBdev1", 00:19:23.870 "uuid": "6c3d51bc-8501-4483-9bf4-3d237382fc25", 00:19:23.870 "is_configured": true, 00:19:23.870 "data_offset": 2048, 00:19:23.870 "data_size": 63488 00:19:23.870 }, 00:19:23.870 { 00:19:23.870 "name": "BaseBdev2", 00:19:23.870 "uuid": "d264349f-57f7-47b7-8f2d-d9537a667655", 00:19:23.870 "is_configured": true, 00:19:23.870 "data_offset": 2048, 00:19:23.870 "data_size": 63488 00:19:23.870 }, 00:19:23.870 { 00:19:23.870 "name": "BaseBdev3", 00:19:23.870 "uuid": "aaa9ce6d-924c-43b1-ae21-f6eed30a84af", 00:19:23.870 "is_configured": true, 00:19:23.870 "data_offset": 2048, 00:19:23.870 "data_size": 63488 00:19:23.870 }, 00:19:23.870 { 00:19:23.870 "name": "BaseBdev4", 00:19:23.870 "uuid": "8fd2ca80-eb9d-4c43-92ee-20332919e0c2", 00:19:23.870 "is_configured": true, 00:19:23.870 "data_offset": 2048, 00:19:23.870 "data_size": 63488 00:19:23.870 } 00:19:23.870 ] 00:19:23.870 }' 00:19:23.870 13:04:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:23.870 13:04:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.436 13:04:43 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:24.694 [2024-06-11 13:04:43.338635] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.694 13:04:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.964 13:04:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:24.964 "name": "Existed_Raid", 00:19:24.964 "uuid": "514bb1e2-038a-45f5-9e25-9ae4c731cbd5", 00:19:24.964 "strip_size_kb": 0, 00:19:24.964 "state": "online", 00:19:24.964 "raid_level": "raid1", 00:19:24.964 "superblock": true, 00:19:24.964 "num_base_bdevs": 4, 00:19:24.964 "num_base_bdevs_discovered": 3, 00:19:24.964 "num_base_bdevs_operational": 3, 00:19:24.964 "base_bdevs_list": [ 00:19:24.964 { 00:19:24.964 "name": null, 00:19:24.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.965 "is_configured": false, 00:19:24.965 "data_offset": 2048, 00:19:24.965 "data_size": 63488 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "name": "BaseBdev2", 00:19:24.965 "uuid": "d264349f-57f7-47b7-8f2d-d9537a667655", 00:19:24.965 "is_configured": true, 00:19:24.965 "data_offset": 2048, 00:19:24.965 "data_size": 63488 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "name": "BaseBdev3", 00:19:24.965 "uuid": "aaa9ce6d-924c-43b1-ae21-f6eed30a84af", 00:19:24.965 "is_configured": true, 00:19:24.965 "data_offset": 2048, 00:19:24.965 "data_size": 63488 00:19:24.965 }, 00:19:24.965 { 00:19:24.965 "name": "BaseBdev4", 00:19:24.965 "uuid": "8fd2ca80-eb9d-4c43-92ee-20332919e0c2", 00:19:24.965 "is_configured": true, 00:19:24.965 "data_offset": 2048, 00:19:24.965 "data_size": 63488 00:19:24.965 } 00:19:24.965 ] 00:19:24.965 }' 00:19:24.965 13:04:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:24.965 13:04:43 -- common/autotest_common.sh@10 -- # set +x 00:19:25.547 13:04:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:25.547 13:04:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:25.547 13:04:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.547 13:04:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:25.806 13:04:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:25.806 13:04:44 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:25.806 13:04:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:26.064 [2024-06-11 13:04:44.763622] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:26.064 13:04:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:26.064 13:04:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:26.064 13:04:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.064 13:04:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:26.323 13:04:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:26.323 13:04:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:26.323 13:04:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:26.582 [2024-06-11 13:04:45.292517] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:26.582 13:04:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:26.582 13:04:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:26.582 13:04:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.582 13:04:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:26.841 13:04:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:26.841 13:04:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:26.841 13:04:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:27.100 [2024-06-11 13:04:45.805755] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:27.100 [2024-06-11 13:04:45.805967] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.100 [2024-06-11 13:04:45.806136] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.100 [2024-06-11 13:04:45.875285] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:27.100 [2024-06-11 13:04:45.875465] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:19:27.100 13:04:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:27.100 13:04:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:27.100 13:04:45 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.100 13:04:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:27.359 13:04:46 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:27.359 13:04:46 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:27.359 13:04:46 -- bdev/bdev_raid.sh@287 -- # killprocess 124448 00:19:27.359 13:04:46 -- common/autotest_common.sh@926 -- # '[' -z 124448 ']' 00:19:27.359 13:04:46 -- common/autotest_common.sh@930 -- # kill -0 124448 00:19:27.359 13:04:46 -- common/autotest_common.sh@931 -- # uname 00:19:27.359 13:04:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:27.359 13:04:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124448 00:19:27.359 killing process with pid 124448 00:19:27.359 13:04:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
00:19:27.359 13:04:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:27.359 13:04:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124448' 00:19:27.359 13:04:46 -- common/autotest_common.sh@945 -- # kill 124448 00:19:27.359 13:04:46 -- common/autotest_common.sh@950 -- # wait 124448 00:19:27.359 [2024-06-11 13:04:46.106093] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:27.359 [2024-06-11 13:04:46.106233] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:28.296 ************************************ 00:19:28.296 END TEST raid_state_function_test_sb 00:19:28.296 ************************************ 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:28.296 00:19:28.296 real 0m15.161s 00:19:28.296 user 0m27.269s 00:19:28.296 sys 0m1.724s 00:19:28.296 13:04:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.296 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:28.296 13:04:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:28.296 13:04:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:28.296 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:19:28.296 ************************************ 00:19:28.296 START TEST raid_superblock_test 00:19:28.296 ************************************ 00:19:28.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:28.296 13:04:47 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@357 -- # raid_pid=124930 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124930 /var/tmp/spdk-raid.sock 00:19:28.296 13:04:47 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:28.296 13:04:47 -- common/autotest_common.sh@819 -- # '[' -z 124930 ']' 00:19:28.296 13:04:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:28.296 13:04:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:28.296 13:04:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:19:28.296 13:04:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:28.296 13:04:47 -- common/autotest_common.sh@10 -- # set +x 00:19:28.555 [2024-06-11 13:04:47.188260] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:28.555 [2024-06-11 13:04:47.188677] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124930 ] 00:19:28.555 [2024-06-11 13:04:47.356667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.814 [2024-06-11 13:04:47.575237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.073 [2024-06-11 13:04:47.771376] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:29.331 13:04:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:29.331 13:04:48 -- common/autotest_common.sh@852 -- # return 0 00:19:29.331 13:04:48 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:29.331 13:04:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:29.331 13:04:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:29.331 13:04:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:29.331 13:04:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:29.331 13:04:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:29.331 13:04:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:29.331 13:04:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:29.331 13:04:48 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:29.590 malloc1 00:19:29.590 13:04:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:29.849 [2024-06-11 13:04:48.591162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:29.849 [2024-06-11 13:04:48.591507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.849 [2024-06-11 13:04:48.591678] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:29.849 [2024-06-11 13:04:48.591839] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.849 [2024-06-11 13:04:48.594449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.849 [2024-06-11 13:04:48.594631] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:29.849 pt1 00:19:29.849 13:04:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:29.849 13:04:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:29.849 13:04:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:29.849 13:04:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:29.849 13:04:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:29.849 13:04:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:29.849 13:04:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:29.849 13:04:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:29.849 13:04:48 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:30.108 malloc2 00:19:30.108 13:04:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:30.367 [2024-06-11 13:04:49.019365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:30.367 [2024-06-11 13:04:49.019697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.367 [2024-06-11 13:04:49.019864] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:30.367 [2024-06-11 13:04:49.020031] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.367 [2024-06-11 13:04:49.022655] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.367 [2024-06-11 13:04:49.022846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:30.367 pt2 00:19:30.367 13:04:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:30.367 13:04:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:30.367 13:04:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:30.367 13:04:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:30.367 13:04:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:30.367 13:04:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:30.367 13:04:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:30.367 13:04:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:30.367 13:04:49 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:30.626 malloc3 00:19:30.626 13:04:49 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:30.626 [2024-06-11 13:04:49.458109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:30.626 [2024-06-11 13:04:49.458411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.626 [2024-06-11 13:04:49.458575] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:30.626 [2024-06-11 13:04:49.458753] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.626 [2024-06-11 13:04:49.461374] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.626 [2024-06-11 13:04:49.461593] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:30.626 pt3 00:19:30.885 13:04:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:30.885 13:04:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:30.885 13:04:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:30.885 13:04:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:30.885 13:04:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:30.885 13:04:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:30.885 13:04:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:30.885 13:04:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:30.885 13:04:49 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:30.885 malloc4 00:19:30.885 13:04:49 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:31.143 [2024-06-11 13:04:49.880599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:31.143 [2024-06-11 13:04:49.880869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.143 [2024-06-11 13:04:49.880955] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:31.143 [2024-06-11 13:04:49.881286] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.143 [2024-06-11 13:04:49.883723] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.143 [2024-06-11 13:04:49.883903] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:31.143 pt4 00:19:31.143 13:04:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:31.143 13:04:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:31.143 13:04:49 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:31.402 [2024-06-11 13:04:50.080765] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:31.402 [2024-06-11 13:04:50.082796] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:31.402 [2024-06-11 13:04:50.082988] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:31.402 [2024-06-11 13:04:50.083092] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:31.402 [2024-06-11 13:04:50.083389] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:31.402 [2024-06-11 13:04:50.083535] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:31.402 [2024-06-11 13:04:50.083810] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:31.402 [2024-06-11 13:04:50.084310] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:31.402 [2024-06-11 13:04:50.084439] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:31.402 [2024-06-11 13:04:50.084699] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:19:31.402 13:04:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.661 13:04:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:31.661 "name": "raid_bdev1", 00:19:31.661 "uuid": "55dc5c06-be54-4b48-8b4f-13c37a012c4c", 00:19:31.661 "strip_size_kb": 0, 00:19:31.661 "state": "online", 00:19:31.661 "raid_level": "raid1", 00:19:31.661 "superblock": true, 00:19:31.661 "num_base_bdevs": 4, 00:19:31.661 "num_base_bdevs_discovered": 4, 00:19:31.661 "num_base_bdevs_operational": 4, 00:19:31.661 "base_bdevs_list": [ 00:19:31.661 { 00:19:31.661 "name": "pt1", 00:19:31.661 "uuid": "264ae170-27d7-51a1-9f61-388581a014ab", 00:19:31.661 "is_configured": true, 00:19:31.661 "data_offset": 2048, 00:19:31.661 "data_size": 63488 00:19:31.661 }, 00:19:31.661 { 00:19:31.661 "name": "pt2", 00:19:31.661 "uuid": "6a53ab4e-59fc-5317-b2e4-9a2a3110e360", 00:19:31.661 "is_configured": true, 00:19:31.661 "data_offset": 2048, 00:19:31.661 "data_size": 63488 00:19:31.661 }, 00:19:31.661 { 00:19:31.661 "name": "pt3", 00:19:31.661 "uuid": "976d9938-d87d-569d-a853-541b2ac202ea", 00:19:31.661 "is_configured": true, 00:19:31.661 "data_offset": 2048, 00:19:31.661 "data_size": 63488 00:19:31.661 }, 00:19:31.661 { 00:19:31.661 "name": "pt4", 00:19:31.661 "uuid": "ecc71154-7627-51f6-b013-367837f19e8f", 00:19:31.661 "is_configured": true, 00:19:31.661 "data_offset": 2048, 00:19:31.661 "data_size": 63488 00:19:31.661 } 00:19:31.661 ] 00:19:31.661 }' 00:19:31.661 13:04:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:31.661 13:04:50 -- common/autotest_common.sh@10 -- # set +x 00:19:32.228 13:04:50 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:32.228 13:04:50 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:32.485 [2024-06-11 13:04:51.193129] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.485 13:04:51 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=55dc5c06-be54-4b48-8b4f-13c37a012c4c 00:19:32.485 13:04:51 -- bdev/bdev_raid.sh@380 -- # '[' -z 55dc5c06-be54-4b48-8b4f-13c37a012c4c ']' 00:19:32.485 13:04:51 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:32.742 [2024-06-11 13:04:51.444971] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:32.742 [2024-06-11 13:04:51.445133] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.742 [2024-06-11 13:04:51.445322] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.742 [2024-06-11 13:04:51.445572] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.742 [2024-06-11 13:04:51.445743] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:32.742 13:04:51 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.742 13:04:51 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:32.999 13:04:51 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:32.999 13:04:51 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:32.999 13:04:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:32.999 13:04:51 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:19:33.257 13:04:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:33.257 13:04:51 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:33.257 13:04:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:33.257 13:04:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:33.515 13:04:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:33.515 13:04:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:33.772 13:04:52 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:33.772 13:04:52 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:34.031 13:04:52 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:34.031 13:04:52 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:34.031 13:04:52 -- common/autotest_common.sh@640 -- # local es=0 00:19:34.031 13:04:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:34.031 13:04:52 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.031 13:04:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:34.031 13:04:52 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.031 13:04:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:34.031 13:04:52 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.031 13:04:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:34.031 13:04:52 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.031 13:04:52 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:34.031 13:04:52 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:34.289 [2024-06-11 13:04:52.929189] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:34.289 [2024-06-11 13:04:52.931003] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:34.289 [2024-06-11 13:04:52.931188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:34.289 [2024-06-11 13:04:52.931377] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:34.289 [2024-06-11 13:04:52.931568] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:34.289 [2024-06-11 13:04:52.931763] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:34.289 [2024-06-11 13:04:52.931904] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:34.289 [2024-06-11 13:04:52.932065] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:34.289 [2024-06-11 13:04:52.932194] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.289 [2024-06-11 13:04:52.932293] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:19:34.289 request: 00:19:34.289 { 00:19:34.289 "name": "raid_bdev1", 00:19:34.289 "raid_level": "raid1", 00:19:34.289 "base_bdevs": [ 00:19:34.289 "malloc1", 00:19:34.289 "malloc2", 00:19:34.289 "malloc3", 00:19:34.289 "malloc4" 00:19:34.289 ], 00:19:34.289 "superblock": false, 00:19:34.289 "method": "bdev_raid_create", 00:19:34.289 "req_id": 1 00:19:34.289 } 00:19:34.289 Got JSON-RPC error response 00:19:34.289 response: 00:19:34.289 { 00:19:34.289 "code": -17, 00:19:34.289 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:34.289 } 00:19:34.289 13:04:52 -- common/autotest_common.sh@643 -- # es=1 00:19:34.289 13:04:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:34.289 13:04:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:34.289 13:04:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:34.289 13:04:52 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.289 13:04:52 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:34.549 [2024-06-11 13:04:53.325246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:34.549 [2024-06-11 13:04:53.325556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.549 [2024-06-11 13:04:53.325738] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:34.549 [2024-06-11 13:04:53.325868] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.549 [2024-06-11 13:04:53.328389] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.549 [2024-06-11 13:04:53.328589] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:34.549 [2024-06-11 13:04:53.328923] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:34.549 [2024-06-11 13:04:53.329087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:34.549 pt1 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.549 13:04:53 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.549 13:04:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.807 13:04:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:34.807 "name": "raid_bdev1", 00:19:34.807 "uuid": "55dc5c06-be54-4b48-8b4f-13c37a012c4c", 00:19:34.807 "strip_size_kb": 0, 00:19:34.807 "state": "configuring", 00:19:34.807 "raid_level": "raid1", 00:19:34.807 "superblock": true, 00:19:34.807 "num_base_bdevs": 4, 00:19:34.807 "num_base_bdevs_discovered": 1, 00:19:34.807 "num_base_bdevs_operational": 4, 00:19:34.807 "base_bdevs_list": [ 00:19:34.807 { 00:19:34.807 "name": "pt1", 00:19:34.807 "uuid": "264ae170-27d7-51a1-9f61-388581a014ab", 00:19:34.807 "is_configured": true, 00:19:34.807 "data_offset": 2048, 00:19:34.807 "data_size": 63488 00:19:34.807 }, 00:19:34.807 { 00:19:34.807 "name": null, 00:19:34.807 "uuid": "6a53ab4e-59fc-5317-b2e4-9a2a3110e360", 00:19:34.807 "is_configured": false, 00:19:34.807 "data_offset": 2048, 00:19:34.807 "data_size": 63488 00:19:34.807 }, 00:19:34.807 { 00:19:34.807 "name": null, 00:19:34.807 "uuid": "976d9938-d87d-569d-a853-541b2ac202ea", 00:19:34.807 "is_configured": false, 00:19:34.807 "data_offset": 2048, 00:19:34.807 "data_size": 63488 00:19:34.807 }, 00:19:34.807 { 00:19:34.807 "name": null, 00:19:34.807 "uuid": "ecc71154-7627-51f6-b013-367837f19e8f", 00:19:34.807 "is_configured": false, 00:19:34.807 "data_offset": 2048, 00:19:34.807 "data_size": 63488 00:19:34.807 } 00:19:34.807 ] 00:19:34.807 }' 00:19:34.807 13:04:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:34.807 13:04:53 -- common/autotest_common.sh@10 -- # set +x 00:19:35.742 13:04:54 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:35.742 13:04:54 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:35.742 [2024-06-11 13:04:54.402042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:35.742 [2024-06-11 13:04:54.402409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.742 [2024-06-11 13:04:54.402595] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:35.742 [2024-06-11 13:04:54.402708] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.742 [2024-06-11 13:04:54.403328] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.742 [2024-06-11 13:04:54.403503] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:35.742 [2024-06-11 13:04:54.403706] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:35.742 [2024-06-11 13:04:54.403842] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:35.742 pt2 00:19:35.742 13:04:54 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:36.001 [2024-06-11 13:04:54.634140] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.001 13:04:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.261 13:04:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.261 "name": "raid_bdev1", 00:19:36.261 "uuid": "55dc5c06-be54-4b48-8b4f-13c37a012c4c", 00:19:36.261 "strip_size_kb": 0, 00:19:36.261 "state": "configuring", 00:19:36.261 "raid_level": "raid1", 00:19:36.261 "superblock": true, 00:19:36.261 "num_base_bdevs": 4, 00:19:36.261 "num_base_bdevs_discovered": 1, 00:19:36.261 "num_base_bdevs_operational": 4, 00:19:36.261 "base_bdevs_list": [ 00:19:36.261 { 00:19:36.261 "name": "pt1", 00:19:36.261 "uuid": "264ae170-27d7-51a1-9f61-388581a014ab", 00:19:36.261 "is_configured": true, 00:19:36.261 "data_offset": 2048, 00:19:36.261 "data_size": 63488 00:19:36.261 }, 00:19:36.261 { 00:19:36.261 "name": null, 00:19:36.261 "uuid": "6a53ab4e-59fc-5317-b2e4-9a2a3110e360", 00:19:36.261 "is_configured": false, 00:19:36.261 "data_offset": 2048, 00:19:36.261 "data_size": 63488 00:19:36.261 }, 00:19:36.261 { 00:19:36.262 "name": null, 00:19:36.262 "uuid": "976d9938-d87d-569d-a853-541b2ac202ea", 00:19:36.262 "is_configured": false, 00:19:36.262 "data_offset": 2048, 00:19:36.262 "data_size": 63488 00:19:36.262 }, 00:19:36.262 { 00:19:36.262 "name": null, 00:19:36.262 "uuid": "ecc71154-7627-51f6-b013-367837f19e8f", 00:19:36.262 "is_configured": false, 00:19:36.262 "data_offset": 2048, 00:19:36.262 "data_size": 63488 00:19:36.262 } 00:19:36.262 ] 00:19:36.262 }' 00:19:36.262 13:04:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.262 13:04:54 -- common/autotest_common.sh@10 -- # set +x 00:19:36.828 13:04:55 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:36.828 13:04:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:36.828 13:04:55 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:37.087 [2024-06-11 13:04:55.778382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:37.087 [2024-06-11 13:04:55.778630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.087 [2024-06-11 13:04:55.778706] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:37.087 [2024-06-11 13:04:55.778984] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.087 [2024-06-11 13:04:55.779537] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.087 [2024-06-11 13:04:55.779714] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:37.087 [2024-06-11 13:04:55.779915] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:37.087 [2024-06-11 
13:04:55.780106] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.087 pt2 00:19:37.087 13:04:55 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:37.087 13:04:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:37.087 13:04:55 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:37.345 [2024-06-11 13:04:56.030409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:37.345 [2024-06-11 13:04:56.030663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.345 [2024-06-11 13:04:56.030735] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:37.345 [2024-06-11 13:04:56.030940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.345 [2024-06-11 13:04:56.031441] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.345 [2024-06-11 13:04:56.031654] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:37.345 [2024-06-11 13:04:56.031849] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:37.345 [2024-06-11 13:04:56.031978] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:37.345 pt3 00:19:37.345 13:04:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:37.345 13:04:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:37.345 13:04:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:37.603 [2024-06-11 13:04:56.234456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:37.603 [2024-06-11 13:04:56.234670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.603 [2024-06-11 13:04:56.234734] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:37.603 [2024-06-11 13:04:56.234934] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.603 [2024-06-11 13:04:56.235467] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.603 [2024-06-11 13:04:56.235641] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:37.603 [2024-06-11 13:04:56.235786] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:37.603 [2024-06-11 13:04:56.235846] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:37.603 [2024-06-11 13:04:56.236031] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:19:37.603 [2024-06-11 13:04:56.236215] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:37.603 [2024-06-11 13:04:56.236376] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:37.603 [2024-06-11 13:04:56.236824] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:19:37.603 [2024-06-11 13:04:56.236982] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:19:37.603 [2024-06-11 13:04:56.237207] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.603 pt4 
00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.603 13:04:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.862 13:04:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.862 "name": "raid_bdev1", 00:19:37.862 "uuid": "55dc5c06-be54-4b48-8b4f-13c37a012c4c", 00:19:37.862 "strip_size_kb": 0, 00:19:37.862 "state": "online", 00:19:37.862 "raid_level": "raid1", 00:19:37.862 "superblock": true, 00:19:37.862 "num_base_bdevs": 4, 00:19:37.862 "num_base_bdevs_discovered": 4, 00:19:37.862 "num_base_bdevs_operational": 4, 00:19:37.862 "base_bdevs_list": [ 00:19:37.862 { 00:19:37.862 "name": "pt1", 00:19:37.862 "uuid": "264ae170-27d7-51a1-9f61-388581a014ab", 00:19:37.862 "is_configured": true, 00:19:37.862 "data_offset": 2048, 00:19:37.862 "data_size": 63488 00:19:37.862 }, 00:19:37.862 { 00:19:37.862 "name": "pt2", 00:19:37.862 "uuid": "6a53ab4e-59fc-5317-b2e4-9a2a3110e360", 00:19:37.862 "is_configured": true, 00:19:37.862 "data_offset": 2048, 00:19:37.862 "data_size": 63488 00:19:37.862 }, 00:19:37.862 { 00:19:37.862 "name": "pt3", 00:19:37.862 "uuid": "976d9938-d87d-569d-a853-541b2ac202ea", 00:19:37.862 "is_configured": true, 00:19:37.862 "data_offset": 2048, 00:19:37.862 "data_size": 63488 00:19:37.862 }, 00:19:37.862 { 00:19:37.862 "name": "pt4", 00:19:37.862 "uuid": "ecc71154-7627-51f6-b013-367837f19e8f", 00:19:37.862 "is_configured": true, 00:19:37.862 "data_offset": 2048, 00:19:37.862 "data_size": 63488 00:19:37.862 } 00:19:37.862 ] 00:19:37.862 }' 00:19:37.862 13:04:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.862 13:04:56 -- common/autotest_common.sh@10 -- # set +x 00:19:38.429 13:04:57 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:38.429 13:04:57 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:38.687 [2024-06-11 13:04:57.318898] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.687 13:04:57 -- bdev/bdev_raid.sh@430 -- # '[' 55dc5c06-be54-4b48-8b4f-13c37a012c4c '!=' 55dc5c06-be54-4b48-8b4f-13c37a012c4c ']' 00:19:38.687 13:04:57 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:19:38.687 13:04:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:38.687 13:04:57 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:38.687 13:04:57 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:38.945 [2024-06-11 13:04:57.578803] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.945 13:04:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.203 13:04:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:39.203 "name": "raid_bdev1", 00:19:39.203 "uuid": "55dc5c06-be54-4b48-8b4f-13c37a012c4c", 00:19:39.203 "strip_size_kb": 0, 00:19:39.203 "state": "online", 00:19:39.203 "raid_level": "raid1", 00:19:39.203 "superblock": true, 00:19:39.203 "num_base_bdevs": 4, 00:19:39.203 "num_base_bdevs_discovered": 3, 00:19:39.203 "num_base_bdevs_operational": 3, 00:19:39.203 "base_bdevs_list": [ 00:19:39.203 { 00:19:39.203 "name": null, 00:19:39.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.203 "is_configured": false, 00:19:39.203 "data_offset": 2048, 00:19:39.203 "data_size": 63488 00:19:39.203 }, 00:19:39.203 { 00:19:39.203 "name": "pt2", 00:19:39.203 "uuid": "6a53ab4e-59fc-5317-b2e4-9a2a3110e360", 00:19:39.203 "is_configured": true, 00:19:39.203 "data_offset": 2048, 00:19:39.203 "data_size": 63488 00:19:39.203 }, 00:19:39.203 { 00:19:39.203 "name": "pt3", 00:19:39.203 "uuid": "976d9938-d87d-569d-a853-541b2ac202ea", 00:19:39.203 "is_configured": true, 00:19:39.203 "data_offset": 2048, 00:19:39.203 "data_size": 63488 00:19:39.203 }, 00:19:39.203 { 00:19:39.203 "name": "pt4", 00:19:39.203 "uuid": "ecc71154-7627-51f6-b013-367837f19e8f", 00:19:39.203 "is_configured": true, 00:19:39.203 "data_offset": 2048, 00:19:39.203 "data_size": 63488 00:19:39.203 } 00:19:39.203 ] 00:19:39.203 }' 00:19:39.203 13:04:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:39.203 13:04:57 -- common/autotest_common.sh@10 -- # set +x 00:19:39.769 13:04:58 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:40.027 [2024-06-11 13:04:58.690956] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.027 [2024-06-11 13:04:58.691136] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.027 [2024-06-11 13:04:58.691317] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.027 [2024-06-11 13:04:58.691503] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.027 [2024-06-11 13:04:58.691614] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:19:40.027 13:04:58 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:40.027 13:04:58 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:19:40.284 13:04:58 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:19:40.284 13:04:58 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:19:40.284 13:04:58 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:19:40.284 13:04:58 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:40.284 13:04:58 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:40.541 13:04:59 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:40.541 13:04:59 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:40.541 13:04:59 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:40.799 13:04:59 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:40.799 13:04:59 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:40.799 13:04:59 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:40.799 13:04:59 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:40.799 13:04:59 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:40.799 13:04:59 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:19:40.799 13:04:59 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:40.799 13:04:59 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:41.057 [2024-06-11 13:04:59.799117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:41.057 [2024-06-11 13:04:59.799313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.057 [2024-06-11 13:04:59.799378] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:19:41.057 [2024-06-11 13:04:59.799489] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.057 [2024-06-11 13:04:59.801508] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.057 [2024-06-11 13:04:59.801714] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:41.057 [2024-06-11 13:04:59.801966] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:41.057 [2024-06-11 13:04:59.802122] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:41.057 pt2 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.057 13:04:59 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.316 13:05:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.316 "name": "raid_bdev1", 00:19:41.316 "uuid": "55dc5c06-be54-4b48-8b4f-13c37a012c4c", 00:19:41.316 "strip_size_kb": 0, 00:19:41.316 "state": "configuring", 00:19:41.316 "raid_level": "raid1", 00:19:41.316 "superblock": true, 00:19:41.316 "num_base_bdevs": 4, 00:19:41.316 "num_base_bdevs_discovered": 1, 00:19:41.316 "num_base_bdevs_operational": 3, 00:19:41.316 "base_bdevs_list": [ 00:19:41.316 { 00:19:41.316 "name": null, 00:19:41.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.316 "is_configured": false, 00:19:41.316 "data_offset": 2048, 00:19:41.316 "data_size": 63488 00:19:41.316 }, 00:19:41.316 { 00:19:41.316 "name": "pt2", 00:19:41.317 "uuid": "6a53ab4e-59fc-5317-b2e4-9a2a3110e360", 00:19:41.317 "is_configured": true, 00:19:41.317 "data_offset": 2048, 00:19:41.317 "data_size": 63488 00:19:41.317 }, 00:19:41.317 { 00:19:41.317 "name": null, 00:19:41.317 "uuid": "976d9938-d87d-569d-a853-541b2ac202ea", 00:19:41.317 "is_configured": false, 00:19:41.317 "data_offset": 2048, 00:19:41.317 "data_size": 63488 00:19:41.317 }, 00:19:41.317 { 00:19:41.317 "name": null, 00:19:41.317 "uuid": "ecc71154-7627-51f6-b013-367837f19e8f", 00:19:41.317 "is_configured": false, 00:19:41.317 "data_offset": 2048, 00:19:41.317 "data_size": 63488 00:19:41.317 } 00:19:41.317 ] 00:19:41.317 }' 00:19:41.317 13:05:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.317 13:05:00 -- common/autotest_common.sh@10 -- # set +x 00:19:41.884 13:05:00 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:41.884 13:05:00 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:41.884 13:05:00 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:42.142 [2024-06-11 13:05:00.887339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:42.142 [2024-06-11 13:05:00.887619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.142 [2024-06-11 13:05:00.887791] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:42.142 [2024-06-11 13:05:00.887933] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.142 [2024-06-11 13:05:00.888574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.142 [2024-06-11 13:05:00.888792] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:42.142 [2024-06-11 13:05:00.889060] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:42.142 [2024-06-11 13:05:00.889195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:42.142 pt3 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.142 13:05:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.401 13:05:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.401 "name": "raid_bdev1", 00:19:42.401 "uuid": "55dc5c06-be54-4b48-8b4f-13c37a012c4c", 00:19:42.401 "strip_size_kb": 0, 00:19:42.401 "state": "configuring", 00:19:42.401 "raid_level": "raid1", 00:19:42.401 "superblock": true, 00:19:42.401 "num_base_bdevs": 4, 00:19:42.401 "num_base_bdevs_discovered": 2, 00:19:42.401 "num_base_bdevs_operational": 3, 00:19:42.401 "base_bdevs_list": [ 00:19:42.401 { 00:19:42.401 "name": null, 00:19:42.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.401 "is_configured": false, 00:19:42.401 "data_offset": 2048, 00:19:42.401 "data_size": 63488 00:19:42.401 }, 00:19:42.401 { 00:19:42.401 "name": "pt2", 00:19:42.401 "uuid": "6a53ab4e-59fc-5317-b2e4-9a2a3110e360", 00:19:42.401 "is_configured": true, 00:19:42.401 "data_offset": 2048, 00:19:42.401 "data_size": 63488 00:19:42.401 }, 00:19:42.401 { 00:19:42.401 "name": "pt3", 00:19:42.401 "uuid": "976d9938-d87d-569d-a853-541b2ac202ea", 00:19:42.401 "is_configured": true, 00:19:42.401 "data_offset": 2048, 00:19:42.401 "data_size": 63488 00:19:42.401 }, 00:19:42.401 { 00:19:42.401 "name": null, 00:19:42.401 "uuid": "ecc71154-7627-51f6-b013-367837f19e8f", 00:19:42.401 "is_configured": false, 00:19:42.401 "data_offset": 2048, 00:19:42.401 "data_size": 63488 00:19:42.401 } 00:19:42.401 ] 00:19:42.401 }' 00:19:42.401 13:05:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.401 13:05:01 -- common/autotest_common.sh@10 -- # set +x 00:19:42.969 13:05:01 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:42.969 13:05:01 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:42.969 13:05:01 -- bdev/bdev_raid.sh@462 -- # i=3 00:19:42.969 13:05:01 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:43.227 [2024-06-11 13:05:01.935526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:43.227 [2024-06-11 13:05:01.935771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.227 [2024-06-11 13:05:01.935929] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:43.227 [2024-06-11 13:05:01.936062] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.227 [2024-06-11 13:05:01.936670] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.227 [2024-06-11 13:05:01.936837] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:43.227 [2024-06-11 13:05:01.937053] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:43.227 [2024-06-11 13:05:01.937191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:43.227 [2024-06-11 13:05:01.937490] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:19:43.227 [2024-06-11 13:05:01.937643] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
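Each verify_raid_bdev_state call in this run follows the same pattern: dump every raid bdev over the test RPC socket, select raid_bdev1 with jq, and compare the reported fields against the expected values. A condensed sketch of that check, using only the RPC call and jq filter that appear in this log:

    # Query every raid bdev and keep only the one under test.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    # The test then reads .state ("configuring" or "online"), .raid_level
    # ("raid1"), .num_base_bdevs_discovered and .num_base_bdevs_operational
    # from the returned JSON and fails if any differ from the expected values.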
00:19:43.227 [2024-06-11 13:05:01.937933] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:43.227 [2024-06-11 13:05:01.938455] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:19:43.227 [2024-06-11 13:05:01.938588] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:19:43.227 [2024-06-11 13:05:01.938828] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.227 pt4 00:19:43.227 13:05:01 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:43.227 13:05:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:43.227 13:05:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:43.227 13:05:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:43.227 13:05:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:43.227 13:05:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:43.228 13:05:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:43.228 13:05:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:43.228 13:05:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:43.228 13:05:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:43.228 13:05:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.228 13:05:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.486 13:05:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:43.486 "name": "raid_bdev1", 00:19:43.486 "uuid": "55dc5c06-be54-4b48-8b4f-13c37a012c4c", 00:19:43.486 "strip_size_kb": 0, 00:19:43.486 "state": "online", 00:19:43.486 "raid_level": "raid1", 00:19:43.486 "superblock": true, 00:19:43.486 "num_base_bdevs": 4, 00:19:43.486 "num_base_bdevs_discovered": 3, 00:19:43.486 "num_base_bdevs_operational": 3, 00:19:43.486 "base_bdevs_list": [ 00:19:43.486 { 00:19:43.486 "name": null, 00:19:43.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.486 "is_configured": false, 00:19:43.486 "data_offset": 2048, 00:19:43.486 "data_size": 63488 00:19:43.486 }, 00:19:43.486 { 00:19:43.486 "name": "pt2", 00:19:43.486 "uuid": "6a53ab4e-59fc-5317-b2e4-9a2a3110e360", 00:19:43.486 "is_configured": true, 00:19:43.486 "data_offset": 2048, 00:19:43.486 "data_size": 63488 00:19:43.486 }, 00:19:43.486 { 00:19:43.486 "name": "pt3", 00:19:43.486 "uuid": "976d9938-d87d-569d-a853-541b2ac202ea", 00:19:43.486 "is_configured": true, 00:19:43.486 "data_offset": 2048, 00:19:43.486 "data_size": 63488 00:19:43.486 }, 00:19:43.486 { 00:19:43.486 "name": "pt4", 00:19:43.486 "uuid": "ecc71154-7627-51f6-b013-367837f19e8f", 00:19:43.486 "is_configured": true, 00:19:43.486 "data_offset": 2048, 00:19:43.486 "data_size": 63488 00:19:43.486 } 00:19:43.486 ] 00:19:43.486 }' 00:19:43.486 13:05:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:43.486 13:05:02 -- common/autotest_common.sh@10 -- # set +x 00:19:44.052 13:05:02 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:19:44.052 13:05:02 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:44.310 [2024-06-11 13:05:03.111764] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.310 [2024-06-11 13:05:03.111996] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:19:44.310 [2024-06-11 13:05:03.112193] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.310 [2024-06-11 13:05:03.112416] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.310 [2024-06-11 13:05:03.112538] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:19:44.310 13:05:03 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.310 13:05:03 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:19:44.568 13:05:03 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:19:44.568 13:05:03 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:19:44.569 13:05:03 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:44.827 [2024-06-11 13:05:03.543816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:44.827 [2024-06-11 13:05:03.544066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.827 [2024-06-11 13:05:03.544227] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:44.827 [2024-06-11 13:05:03.544354] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.827 [2024-06-11 13:05:03.546970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.827 [2024-06-11 13:05:03.547174] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:44.827 [2024-06-11 13:05:03.547418] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:44.827 [2024-06-11 13:05:03.547586] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:44.827 pt1 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.827 13:05:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.085 13:05:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.085 "name": "raid_bdev1", 00:19:45.085 "uuid": "55dc5c06-be54-4b48-8b4f-13c37a012c4c", 00:19:45.085 "strip_size_kb": 0, 00:19:45.085 "state": "configuring", 00:19:45.085 "raid_level": "raid1", 00:19:45.085 "superblock": true, 00:19:45.085 "num_base_bdevs": 4, 00:19:45.085 "num_base_bdevs_discovered": 1, 00:19:45.085 "num_base_bdevs_operational": 4, 00:19:45.085 "base_bdevs_list": [ 00:19:45.085 { 00:19:45.085 "name": "pt1", 00:19:45.085 "uuid": 
"264ae170-27d7-51a1-9f61-388581a014ab", 00:19:45.085 "is_configured": true, 00:19:45.085 "data_offset": 2048, 00:19:45.085 "data_size": 63488 00:19:45.085 }, 00:19:45.085 { 00:19:45.085 "name": null, 00:19:45.085 "uuid": "6a53ab4e-59fc-5317-b2e4-9a2a3110e360", 00:19:45.085 "is_configured": false, 00:19:45.085 "data_offset": 2048, 00:19:45.085 "data_size": 63488 00:19:45.085 }, 00:19:45.085 { 00:19:45.085 "name": null, 00:19:45.085 "uuid": "976d9938-d87d-569d-a853-541b2ac202ea", 00:19:45.085 "is_configured": false, 00:19:45.085 "data_offset": 2048, 00:19:45.085 "data_size": 63488 00:19:45.085 }, 00:19:45.085 { 00:19:45.085 "name": null, 00:19:45.085 "uuid": "ecc71154-7627-51f6-b013-367837f19e8f", 00:19:45.085 "is_configured": false, 00:19:45.085 "data_offset": 2048, 00:19:45.085 "data_size": 63488 00:19:45.085 } 00:19:45.085 ] 00:19:45.085 }' 00:19:45.085 13:05:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.085 13:05:03 -- common/autotest_common.sh@10 -- # set +x 00:19:45.650 13:05:04 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:19:45.650 13:05:04 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:45.650 13:05:04 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:45.907 13:05:04 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:45.907 13:05:04 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:45.907 13:05:04 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:46.165 13:05:04 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:46.165 13:05:04 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:46.165 13:05:04 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@489 -- # i=3 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:46.423 [2024-06-11 13:05:05.218015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:46.423 [2024-06-11 13:05:05.218305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.423 [2024-06-11 13:05:05.218375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:19:46.423 [2024-06-11 13:05:05.218581] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.423 [2024-06-11 13:05:05.219113] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.423 [2024-06-11 13:05:05.219284] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:46.423 [2024-06-11 13:05:05.219485] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:46.423 [2024-06-11 13:05:05.219598] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:46.423 [2024-06-11 13:05:05.219685] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:46.423 [2024-06-11 13:05:05.219744] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 
00:19:46.423 [2024-06-11 13:05:05.219986] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:46.423 pt4 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.423 13:05:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.696 13:05:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.696 "name": "raid_bdev1", 00:19:46.696 "uuid": "55dc5c06-be54-4b48-8b4f-13c37a012c4c", 00:19:46.696 "strip_size_kb": 0, 00:19:46.696 "state": "configuring", 00:19:46.696 "raid_level": "raid1", 00:19:46.696 "superblock": true, 00:19:46.696 "num_base_bdevs": 4, 00:19:46.696 "num_base_bdevs_discovered": 1, 00:19:46.696 "num_base_bdevs_operational": 3, 00:19:46.696 "base_bdevs_list": [ 00:19:46.696 { 00:19:46.696 "name": null, 00:19:46.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.696 "is_configured": false, 00:19:46.696 "data_offset": 2048, 00:19:46.696 "data_size": 63488 00:19:46.696 }, 00:19:46.696 { 00:19:46.696 "name": null, 00:19:46.696 "uuid": "6a53ab4e-59fc-5317-b2e4-9a2a3110e360", 00:19:46.696 "is_configured": false, 00:19:46.696 "data_offset": 2048, 00:19:46.696 "data_size": 63488 00:19:46.696 }, 00:19:46.696 { 00:19:46.696 "name": null, 00:19:46.696 "uuid": "976d9938-d87d-569d-a853-541b2ac202ea", 00:19:46.696 "is_configured": false, 00:19:46.696 "data_offset": 2048, 00:19:46.696 "data_size": 63488 00:19:46.696 }, 00:19:46.696 { 00:19:46.696 "name": "pt4", 00:19:46.696 "uuid": "ecc71154-7627-51f6-b013-367837f19e8f", 00:19:46.696 "is_configured": true, 00:19:46.696 "data_offset": 2048, 00:19:46.696 "data_size": 63488 00:19:46.696 } 00:19:46.696 ] 00:19:46.696 }' 00:19:46.696 13:05:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.696 13:05:05 -- common/autotest_common.sh@10 -- # set +x 00:19:47.291 13:05:06 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:19:47.291 13:05:06 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:47.291 13:05:06 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:47.549 [2024-06-11 13:05:06.274442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:47.549 [2024-06-11 13:05:06.274716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.549 [2024-06-11 13:05:06.274887] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:19:47.549 [2024-06-11 13:05:06.275008] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.549 [2024-06-11 
13:05:06.275619] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.549 [2024-06-11 13:05:06.275807] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:47.549 [2024-06-11 13:05:06.276020] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:47.549 [2024-06-11 13:05:06.276140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:47.549 pt2 00:19:47.549 13:05:06 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:47.549 13:05:06 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:47.549 13:05:06 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:47.808 [2024-06-11 13:05:06.458403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:47.808 [2024-06-11 13:05:06.458617] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.808 [2024-06-11 13:05:06.458679] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:19:47.808 [2024-06-11 13:05:06.458799] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.808 [2024-06-11 13:05:06.459232] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.808 [2024-06-11 13:05:06.459407] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:47.808 [2024-06-11 13:05:06.459589] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:47.808 [2024-06-11 13:05:06.459698] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:47.808 [2024-06-11 13:05:06.459867] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:19:47.808 [2024-06-11 13:05:06.459967] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:47.808 [2024-06-11 13:05:06.460128] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:19:47.808 [2024-06-11 13:05:06.460642] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:19:47.808 [2024-06-11 13:05:06.460761] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:19:47.808 [2024-06-11 13:05:06.461018] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.808 pt3 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.808 13:05:06 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.808 13:05:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.066 13:05:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.066 "name": "raid_bdev1", 00:19:48.066 "uuid": "55dc5c06-be54-4b48-8b4f-13c37a012c4c", 00:19:48.066 "strip_size_kb": 0, 00:19:48.066 "state": "online", 00:19:48.066 "raid_level": "raid1", 00:19:48.066 "superblock": true, 00:19:48.066 "num_base_bdevs": 4, 00:19:48.066 "num_base_bdevs_discovered": 3, 00:19:48.066 "num_base_bdevs_operational": 3, 00:19:48.066 "base_bdevs_list": [ 00:19:48.066 { 00:19:48.066 "name": null, 00:19:48.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.067 "is_configured": false, 00:19:48.067 "data_offset": 2048, 00:19:48.067 "data_size": 63488 00:19:48.067 }, 00:19:48.067 { 00:19:48.067 "name": "pt2", 00:19:48.067 "uuid": "6a53ab4e-59fc-5317-b2e4-9a2a3110e360", 00:19:48.067 "is_configured": true, 00:19:48.067 "data_offset": 2048, 00:19:48.067 "data_size": 63488 00:19:48.067 }, 00:19:48.067 { 00:19:48.067 "name": "pt3", 00:19:48.067 "uuid": "976d9938-d87d-569d-a853-541b2ac202ea", 00:19:48.067 "is_configured": true, 00:19:48.067 "data_offset": 2048, 00:19:48.067 "data_size": 63488 00:19:48.067 }, 00:19:48.067 { 00:19:48.067 "name": "pt4", 00:19:48.067 "uuid": "ecc71154-7627-51f6-b013-367837f19e8f", 00:19:48.067 "is_configured": true, 00:19:48.067 "data_offset": 2048, 00:19:48.067 "data_size": 63488 00:19:48.067 } 00:19:48.067 ] 00:19:48.067 }' 00:19:48.067 13:05:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.067 13:05:06 -- common/autotest_common.sh@10 -- # set +x 00:19:48.634 13:05:07 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:48.634 13:05:07 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:19:48.892 [2024-06-11 13:05:07.522763] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:48.892 13:05:07 -- bdev/bdev_raid.sh@506 -- # '[' 55dc5c06-be54-4b48-8b4f-13c37a012c4c '!=' 55dc5c06-be54-4b48-8b4f-13c37a012c4c ']' 00:19:48.892 13:05:07 -- bdev/bdev_raid.sh@511 -- # killprocess 124930 00:19:48.892 13:05:07 -- common/autotest_common.sh@926 -- # '[' -z 124930 ']' 00:19:48.892 13:05:07 -- common/autotest_common.sh@930 -- # kill -0 124930 00:19:48.892 13:05:07 -- common/autotest_common.sh@931 -- # uname 00:19:48.892 13:05:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:48.893 13:05:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124930 00:19:48.893 killing process with pid 124930 00:19:48.893 13:05:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:48.893 13:05:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:48.893 13:05:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124930' 00:19:48.893 13:05:07 -- common/autotest_common.sh@945 -- # kill 124930 00:19:48.893 13:05:07 -- common/autotest_common.sh@950 -- # wait 124930 00:19:48.893 [2024-06-11 13:05:07.556709] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:48.893 [2024-06-11 13:05:07.556782] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.893 [2024-06-11 13:05:07.556895] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.893 [2024-06-11 
13:05:07.556910] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:19:49.151 [2024-06-11 13:05:07.846249] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:50.087 ************************************ 00:19:50.087 END TEST raid_superblock_test 00:19:50.087 ************************************ 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:50.087 00:19:50.087 real 0m21.694s 00:19:50.087 user 0m40.291s 00:19:50.087 sys 0m2.326s 00:19:50.087 13:05:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.087 13:05:08 -- common/autotest_common.sh@10 -- # set +x 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:19:50.087 13:05:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:50.087 13:05:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:50.087 13:05:08 -- common/autotest_common.sh@10 -- # set +x 00:19:50.087 ************************************ 00:19:50.087 START TEST raid_rebuild_test 00:19:50.087 ************************************ 00:19:50.087 13:05:08 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@544 -- # raid_pid=125632 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125632 /var/tmp/spdk-raid.sock 00:19:50.087 13:05:08 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:50.087 13:05:08 -- common/autotest_common.sh@819 -- # '[' -z 125632 ']' 00:19:50.087 13:05:08 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:19:50.087 13:05:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:50.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:50.087 13:05:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:50.087 13:05:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:50.087 13:05:08 -- common/autotest_common.sh@10 -- # set +x 00:19:50.346 [2024-06-11 13:05:08.944468] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:50.346 [2024-06-11 13:05:08.944843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125632 ] 00:19:50.346 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:50.346 Zero copy mechanism will not be used. 00:19:50.346 [2024-06-11 13:05:09.099927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.605 [2024-06-11 13:05:09.333722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.864 [2024-06-11 13:05:09.511680] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.122 13:05:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:51.122 13:05:09 -- common/autotest_common.sh@852 -- # return 0 00:19:51.122 13:05:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:51.122 13:05:09 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:51.122 13:05:09 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:51.381 BaseBdev1 00:19:51.381 13:05:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:51.381 13:05:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:51.381 13:05:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:51.639 BaseBdev2 00:19:51.639 13:05:10 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:51.898 spare_malloc 00:19:51.898 13:05:10 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:52.157 spare_delay 00:19:52.157 13:05:10 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:52.416 [2024-06-11 13:05:11.004202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:52.416 [2024-06-11 13:05:11.004421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.416 [2024-06-11 13:05:11.004493] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:52.416 [2024-06-11 13:05:11.004813] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.416 [2024-06-11 13:05:11.007180] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.416 [2024-06-11 13:05:11.007340] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:52.416 spare 
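The fixture for the rebuild test is built entirely from the RPC calls traced above: two malloc bdevs act as the raid1 members, and the future rebuild target is a third malloc bdev wrapped first in a delay bdev and then in a passthru bdev named spare (the delay bdev slows the spare down so the rebuild stays observable). A condensed sketch of that setup, with the sizes, names and delay parameters used in this run:

    # Two base bdevs for the raid1 array (65536 blocks of 512 bytes each).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    # Rebuild target: malloc -> delay -> passthru, registered as "spare".
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b spare_delay -p spare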
00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:52.416 [2024-06-11 13:05:11.196231] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.416 [2024-06-11 13:05:11.198230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.416 [2024-06-11 13:05:11.198444] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:19:52.416 [2024-06-11 13:05:11.198485] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:52.416 [2024-06-11 13:05:11.198708] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:52.416 [2024-06-11 13:05:11.199156] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:19:52.416 [2024-06-11 13:05:11.199271] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:19:52.416 [2024-06-11 13:05:11.199511] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.416 13:05:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.680 13:05:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:52.680 "name": "raid_bdev1", 00:19:52.680 "uuid": "580eb523-3d7a-4157-8c42-3a940024d3d6", 00:19:52.680 "strip_size_kb": 0, 00:19:52.680 "state": "online", 00:19:52.680 "raid_level": "raid1", 00:19:52.680 "superblock": false, 00:19:52.680 "num_base_bdevs": 2, 00:19:52.680 "num_base_bdevs_discovered": 2, 00:19:52.680 "num_base_bdevs_operational": 2, 00:19:52.680 "base_bdevs_list": [ 00:19:52.680 { 00:19:52.680 "name": "BaseBdev1", 00:19:52.680 "uuid": "eff70866-8c54-477b-aa89-47bd001abc62", 00:19:52.680 "is_configured": true, 00:19:52.680 "data_offset": 0, 00:19:52.680 "data_size": 65536 00:19:52.680 }, 00:19:52.680 { 00:19:52.680 "name": "BaseBdev2", 00:19:52.680 "uuid": "3dd8970f-6286-4c6c-b276-6f7ae13ecc91", 00:19:52.680 "is_configured": true, 00:19:52.680 "data_offset": 0, 00:19:52.680 "data_size": 65536 00:19:52.680 } 00:19:52.680 ] 00:19:52.680 }' 00:19:52.680 13:05:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:52.680 13:05:11 -- common/autotest_common.sh@10 -- # set +x 00:19:53.247 13:05:12 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:53.247 13:05:12 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 
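With the fixtures in place, the array itself is created without a superblock, verified online, and its block count and data offset are read back for the I/O phase that follows. Reduced to the RPC calls and jq filters shown in this trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    # Size check: 65536 blocks of 512 bytes; data_offset is 0 because this
    # variant runs with superblock=false.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'

The array is then exported through /dev/nbd0 and filled with random data via dd, as the following lines show.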
00:19:53.505 [2024-06-11 13:05:12.296735] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:53.505 13:05:12 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:53.505 13:05:12 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.505 13:05:12 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:53.762 13:05:12 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:53.762 13:05:12 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:53.762 13:05:12 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:53.762 13:05:12 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:53.762 13:05:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:53.762 13:05:12 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:53.762 13:05:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:53.762 13:05:12 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:53.762 13:05:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:53.762 13:05:12 -- bdev/nbd_common.sh@12 -- # local i 00:19:53.762 13:05:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:53.762 13:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:53.762 13:05:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:54.019 [2024-06-11 13:05:12.712594] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:54.019 /dev/nbd0 00:19:54.019 13:05:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:54.019 13:05:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:54.019 13:05:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:54.019 13:05:12 -- common/autotest_common.sh@857 -- # local i 00:19:54.019 13:05:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:54.019 13:05:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:54.019 13:05:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:54.020 13:05:12 -- common/autotest_common.sh@861 -- # break 00:19:54.020 13:05:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:54.020 13:05:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:54.020 13:05:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:54.020 1+0 records in 00:19:54.020 1+0 records out 00:19:54.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660543 s, 6.2 MB/s 00:19:54.020 13:05:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.020 13:05:12 -- common/autotest_common.sh@874 -- # size=4096 00:19:54.020 13:05:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.020 13:05:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:54.020 13:05:12 -- common/autotest_common.sh@877 -- # return 0 00:19:54.020 13:05:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:54.020 13:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.020 13:05:12 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:54.020 13:05:12 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:54.020 13:05:12 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:59.285 65536+0 records in 00:19:59.285 65536+0 records out 
00:19:59.285 33554432 bytes (34 MB, 32 MiB) copied, 4.65328 s, 7.2 MB/s 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@51 -- # local i 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:59.285 [2024-06-11 13:05:17.669419] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@41 -- # break 00:19:59.285 13:05:17 -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:59.285 [2024-06-11 13:05:17.841164] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.285 13:05:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.285 13:05:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:59.285 "name": "raid_bdev1", 00:19:59.285 "uuid": "580eb523-3d7a-4157-8c42-3a940024d3d6", 00:19:59.285 "strip_size_kb": 0, 00:19:59.285 "state": "online", 00:19:59.285 "raid_level": "raid1", 00:19:59.285 "superblock": false, 00:19:59.285 "num_base_bdevs": 2, 00:19:59.285 "num_base_bdevs_discovered": 1, 00:19:59.285 "num_base_bdevs_operational": 1, 00:19:59.285 "base_bdevs_list": [ 00:19:59.285 { 00:19:59.285 "name": null, 00:19:59.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.285 "is_configured": false, 00:19:59.285 "data_offset": 0, 00:19:59.285 "data_size": 65536 00:19:59.285 }, 00:19:59.285 { 00:19:59.285 "name": "BaseBdev2", 00:19:59.285 "uuid": "3dd8970f-6286-4c6c-b276-6f7ae13ecc91", 00:19:59.285 "is_configured": true, 00:19:59.285 "data_offset": 0, 00:19:59.285 "data_size": 65536 00:19:59.285 } 
00:19:59.285 ] 00:19:59.285 }' 00:19:59.285 13:05:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:59.285 13:05:18 -- common/autotest_common.sh@10 -- # set +x 00:20:00.221 13:05:18 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:00.221 [2024-06-11 13:05:18.981409] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:00.221 [2024-06-11 13:05:18.981505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.221 [2024-06-11 13:05:18.994398] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b500 00:20:00.221 [2024-06-11 13:05:18.996309] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:00.221 13:05:18 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:01.596 "name": "raid_bdev1", 00:20:01.596 "uuid": "580eb523-3d7a-4157-8c42-3a940024d3d6", 00:20:01.596 "strip_size_kb": 0, 00:20:01.596 "state": "online", 00:20:01.596 "raid_level": "raid1", 00:20:01.596 "superblock": false, 00:20:01.596 "num_base_bdevs": 2, 00:20:01.596 "num_base_bdevs_discovered": 2, 00:20:01.596 "num_base_bdevs_operational": 2, 00:20:01.596 "process": { 00:20:01.596 "type": "rebuild", 00:20:01.596 "target": "spare", 00:20:01.596 "progress": { 00:20:01.596 "blocks": 24576, 00:20:01.596 "percent": 37 00:20:01.596 } 00:20:01.596 }, 00:20:01.596 "base_bdevs_list": [ 00:20:01.596 { 00:20:01.596 "name": "spare", 00:20:01.596 "uuid": "c7c444ac-a240-5f51-9c57-43c3ff3d7e8e", 00:20:01.596 "is_configured": true, 00:20:01.596 "data_offset": 0, 00:20:01.596 "data_size": 65536 00:20:01.596 }, 00:20:01.596 { 00:20:01.596 "name": "BaseBdev2", 00:20:01.596 "uuid": "3dd8970f-6286-4c6c-b276-6f7ae13ecc91", 00:20:01.596 "is_configured": true, 00:20:01.596 "data_offset": 0, 00:20:01.596 "data_size": 65536 00:20:01.596 } 00:20:01.596 ] 00:20:01.596 }' 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.596 13:05:20 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:01.853 [2024-06-11 13:05:20.602842] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:01.853 [2024-06-11 13:05:20.604768] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:01.853 [2024-06-11 13:05:20.604903] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.853 13:05:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.111 13:05:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:02.111 "name": "raid_bdev1", 00:20:02.111 "uuid": "580eb523-3d7a-4157-8c42-3a940024d3d6", 00:20:02.111 "strip_size_kb": 0, 00:20:02.111 "state": "online", 00:20:02.111 "raid_level": "raid1", 00:20:02.111 "superblock": false, 00:20:02.111 "num_base_bdevs": 2, 00:20:02.111 "num_base_bdevs_discovered": 1, 00:20:02.111 "num_base_bdevs_operational": 1, 00:20:02.111 "base_bdevs_list": [ 00:20:02.111 { 00:20:02.112 "name": null, 00:20:02.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.112 "is_configured": false, 00:20:02.112 "data_offset": 0, 00:20:02.112 "data_size": 65536 00:20:02.112 }, 00:20:02.112 { 00:20:02.112 "name": "BaseBdev2", 00:20:02.112 "uuid": "3dd8970f-6286-4c6c-b276-6f7ae13ecc91", 00:20:02.112 "is_configured": true, 00:20:02.112 "data_offset": 0, 00:20:02.112 "data_size": 65536 00:20:02.112 } 00:20:02.112 ] 00:20:02.112 }' 00:20:02.112 13:05:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:02.112 13:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:03.046 "name": "raid_bdev1", 00:20:03.046 "uuid": "580eb523-3d7a-4157-8c42-3a940024d3d6", 00:20:03.046 "strip_size_kb": 0, 00:20:03.046 "state": "online", 00:20:03.046 "raid_level": "raid1", 00:20:03.046 "superblock": false, 00:20:03.046 "num_base_bdevs": 2, 00:20:03.046 "num_base_bdevs_discovered": 1, 00:20:03.046 "num_base_bdevs_operational": 1, 00:20:03.046 "base_bdevs_list": [ 00:20:03.046 { 00:20:03.046 "name": null, 00:20:03.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.046 "is_configured": false, 00:20:03.046 "data_offset": 0, 00:20:03.046 "data_size": 65536 00:20:03.046 }, 00:20:03.046 { 00:20:03.046 "name": "BaseBdev2", 00:20:03.046 "uuid": "3dd8970f-6286-4c6c-b276-6f7ae13ecc91", 00:20:03.046 
"is_configured": true, 00:20:03.046 "data_offset": 0, 00:20:03.046 "data_size": 65536 00:20:03.046 } 00:20:03.046 ] 00:20:03.046 }' 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:03.046 13:05:21 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:03.304 [2024-06-11 13:05:22.058297] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:03.304 [2024-06-11 13:05:22.058362] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:03.304 [2024-06-11 13:05:22.071253] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:20:03.304 [2024-06-11 13:05:22.073226] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:03.304 13:05:22 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:04.679 "name": "raid_bdev1", 00:20:04.679 "uuid": "580eb523-3d7a-4157-8c42-3a940024d3d6", 00:20:04.679 "strip_size_kb": 0, 00:20:04.679 "state": "online", 00:20:04.679 "raid_level": "raid1", 00:20:04.679 "superblock": false, 00:20:04.679 "num_base_bdevs": 2, 00:20:04.679 "num_base_bdevs_discovered": 2, 00:20:04.679 "num_base_bdevs_operational": 2, 00:20:04.679 "process": { 00:20:04.679 "type": "rebuild", 00:20:04.679 "target": "spare", 00:20:04.679 "progress": { 00:20:04.679 "blocks": 24576, 00:20:04.679 "percent": 37 00:20:04.679 } 00:20:04.679 }, 00:20:04.679 "base_bdevs_list": [ 00:20:04.679 { 00:20:04.679 "name": "spare", 00:20:04.679 "uuid": "c7c444ac-a240-5f51-9c57-43c3ff3d7e8e", 00:20:04.679 "is_configured": true, 00:20:04.679 "data_offset": 0, 00:20:04.679 "data_size": 65536 00:20:04.679 }, 00:20:04.679 { 00:20:04.679 "name": "BaseBdev2", 00:20:04.679 "uuid": "3dd8970f-6286-4c6c-b276-6f7ae13ecc91", 00:20:04.679 "is_configured": true, 00:20:04.679 "data_offset": 0, 00:20:04.679 "data_size": 65536 00:20:04.679 } 00:20:04.679 ] 00:20:04.679 }' 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 
00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@657 -- # local timeout=395 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.679 13:05:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.938 13:05:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:04.938 "name": "raid_bdev1", 00:20:04.938 "uuid": "580eb523-3d7a-4157-8c42-3a940024d3d6", 00:20:04.938 "strip_size_kb": 0, 00:20:04.938 "state": "online", 00:20:04.938 "raid_level": "raid1", 00:20:04.938 "superblock": false, 00:20:04.938 "num_base_bdevs": 2, 00:20:04.938 "num_base_bdevs_discovered": 2, 00:20:04.938 "num_base_bdevs_operational": 2, 00:20:04.938 "process": { 00:20:04.938 "type": "rebuild", 00:20:04.938 "target": "spare", 00:20:04.938 "progress": { 00:20:04.938 "blocks": 30720, 00:20:04.938 "percent": 46 00:20:04.938 } 00:20:04.938 }, 00:20:04.938 "base_bdevs_list": [ 00:20:04.938 { 00:20:04.938 "name": "spare", 00:20:04.938 "uuid": "c7c444ac-a240-5f51-9c57-43c3ff3d7e8e", 00:20:04.938 "is_configured": true, 00:20:04.938 "data_offset": 0, 00:20:04.938 "data_size": 65536 00:20:04.938 }, 00:20:04.938 { 00:20:04.938 "name": "BaseBdev2", 00:20:04.938 "uuid": "3dd8970f-6286-4c6c-b276-6f7ae13ecc91", 00:20:04.938 "is_configured": true, 00:20:04.938 "data_offset": 0, 00:20:04.938 "data_size": 65536 00:20:04.938 } 00:20:04.938 ] 00:20:04.938 }' 00:20:04.938 13:05:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:04.938 13:05:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.938 13:05:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:04.938 13:05:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.938 13:05:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:06.315 13:05:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:06.315 13:05:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:06.315 13:05:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:06.315 13:05:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:06.315 13:05:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:06.315 13:05:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:06.315 13:05:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.315 13:05:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.315 13:05:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:06.315 "name": "raid_bdev1", 00:20:06.315 "uuid": "580eb523-3d7a-4157-8c42-3a940024d3d6", 00:20:06.315 "strip_size_kb": 0, 00:20:06.315 "state": "online", 00:20:06.315 "raid_level": "raid1", 00:20:06.315 "superblock": false, 00:20:06.315 "num_base_bdevs": 2, 00:20:06.315 "num_base_bdevs_discovered": 2, 00:20:06.315 "num_base_bdevs_operational": 2, 
00:20:06.315 "process": { 00:20:06.315 "type": "rebuild", 00:20:06.315 "target": "spare", 00:20:06.315 "progress": { 00:20:06.315 "blocks": 59392, 00:20:06.315 "percent": 90 00:20:06.315 } 00:20:06.315 }, 00:20:06.315 "base_bdevs_list": [ 00:20:06.315 { 00:20:06.315 "name": "spare", 00:20:06.315 "uuid": "c7c444ac-a240-5f51-9c57-43c3ff3d7e8e", 00:20:06.315 "is_configured": true, 00:20:06.315 "data_offset": 0, 00:20:06.315 "data_size": 65536 00:20:06.315 }, 00:20:06.315 { 00:20:06.315 "name": "BaseBdev2", 00:20:06.315 "uuid": "3dd8970f-6286-4c6c-b276-6f7ae13ecc91", 00:20:06.315 "is_configured": true, 00:20:06.315 "data_offset": 0, 00:20:06.315 "data_size": 65536 00:20:06.315 } 00:20:06.315 ] 00:20:06.315 }' 00:20:06.315 13:05:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:06.315 13:05:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:06.315 13:05:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:06.315 13:05:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:06.315 13:05:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:06.574 [2024-06-11 13:05:25.289891] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:06.574 [2024-06-11 13:05:25.289959] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:06.574 [2024-06-11 13:05:25.290035] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.508 13:05:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:07.508 13:05:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.508 13:05:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:07.508 13:05:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:07.509 13:05:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:07.509 13:05:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:07.509 13:05:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.509 13:05:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.766 13:05:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:07.766 "name": "raid_bdev1", 00:20:07.766 "uuid": "580eb523-3d7a-4157-8c42-3a940024d3d6", 00:20:07.766 "strip_size_kb": 0, 00:20:07.766 "state": "online", 00:20:07.766 "raid_level": "raid1", 00:20:07.766 "superblock": false, 00:20:07.767 "num_base_bdevs": 2, 00:20:07.767 "num_base_bdevs_discovered": 2, 00:20:07.767 "num_base_bdevs_operational": 2, 00:20:07.767 "base_bdevs_list": [ 00:20:07.767 { 00:20:07.767 "name": "spare", 00:20:07.767 "uuid": "c7c444ac-a240-5f51-9c57-43c3ff3d7e8e", 00:20:07.767 "is_configured": true, 00:20:07.767 "data_offset": 0, 00:20:07.767 "data_size": 65536 00:20:07.767 }, 00:20:07.767 { 00:20:07.767 "name": "BaseBdev2", 00:20:07.767 "uuid": "3dd8970f-6286-4c6c-b276-6f7ae13ecc91", 00:20:07.767 "is_configured": true, 00:20:07.767 "data_offset": 0, 00:20:07.767 "data_size": 65536 00:20:07.767 } 00:20:07.767 ] 00:20:07.767 }' 00:20:07.767 13:05:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:07.767 13:05:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:07.767 13:05:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:07.767 13:05:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:07.767 13:05:26 -- bdev/bdev_raid.sh@660 -- # break 00:20:07.767 
13:05:26 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:07.767 13:05:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:07.767 13:05:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:07.767 13:05:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:07.767 13:05:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:07.767 13:05:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.767 13:05:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:08.025 "name": "raid_bdev1", 00:20:08.025 "uuid": "580eb523-3d7a-4157-8c42-3a940024d3d6", 00:20:08.025 "strip_size_kb": 0, 00:20:08.025 "state": "online", 00:20:08.025 "raid_level": "raid1", 00:20:08.025 "superblock": false, 00:20:08.025 "num_base_bdevs": 2, 00:20:08.025 "num_base_bdevs_discovered": 2, 00:20:08.025 "num_base_bdevs_operational": 2, 00:20:08.025 "base_bdevs_list": [ 00:20:08.025 { 00:20:08.025 "name": "spare", 00:20:08.025 "uuid": "c7c444ac-a240-5f51-9c57-43c3ff3d7e8e", 00:20:08.025 "is_configured": true, 00:20:08.025 "data_offset": 0, 00:20:08.025 "data_size": 65536 00:20:08.025 }, 00:20:08.025 { 00:20:08.025 "name": "BaseBdev2", 00:20:08.025 "uuid": "3dd8970f-6286-4c6c-b276-6f7ae13ecc91", 00:20:08.025 "is_configured": true, 00:20:08.025 "data_offset": 0, 00:20:08.025 "data_size": 65536 00:20:08.025 } 00:20:08.025 ] 00:20:08.025 }' 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.025 13:05:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.284 13:05:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.284 "name": "raid_bdev1", 00:20:08.284 "uuid": "580eb523-3d7a-4157-8c42-3a940024d3d6", 00:20:08.284 "strip_size_kb": 0, 00:20:08.284 "state": "online", 00:20:08.284 "raid_level": "raid1", 00:20:08.284 "superblock": false, 00:20:08.284 "num_base_bdevs": 2, 00:20:08.284 "num_base_bdevs_discovered": 2, 00:20:08.284 "num_base_bdevs_operational": 2, 00:20:08.284 "base_bdevs_list": [ 00:20:08.284 { 00:20:08.284 "name": "spare", 00:20:08.284 "uuid": "c7c444ac-a240-5f51-9c57-43c3ff3d7e8e", 00:20:08.284 "is_configured": true, 00:20:08.284 "data_offset": 0, 
00:20:08.284 "data_size": 65536 00:20:08.284 }, 00:20:08.284 { 00:20:08.284 "name": "BaseBdev2", 00:20:08.284 "uuid": "3dd8970f-6286-4c6c-b276-6f7ae13ecc91", 00:20:08.284 "is_configured": true, 00:20:08.284 "data_offset": 0, 00:20:08.284 "data_size": 65536 00:20:08.284 } 00:20:08.284 ] 00:20:08.284 }' 00:20:08.284 13:05:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.284 13:05:27 -- common/autotest_common.sh@10 -- # set +x 00:20:08.850 13:05:27 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:09.109 [2024-06-11 13:05:27.873351] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.109 [2024-06-11 13:05:27.873383] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:09.109 [2024-06-11 13:05:27.873475] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.109 [2024-06-11 13:05:27.873538] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.109 [2024-06-11 13:05:27.873549] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:20:09.109 13:05:27 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.109 13:05:27 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:09.367 13:05:28 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:09.367 13:05:28 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:09.367 13:05:28 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:09.367 13:05:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:09.367 13:05:28 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:09.367 13:05:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:09.367 13:05:28 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:09.367 13:05:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:09.367 13:05:28 -- bdev/nbd_common.sh@12 -- # local i 00:20:09.367 13:05:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:09.367 13:05:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:09.367 13:05:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:09.624 /dev/nbd0 00:20:09.624 13:05:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:09.624 13:05:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:09.624 13:05:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:09.624 13:05:28 -- common/autotest_common.sh@857 -- # local i 00:20:09.624 13:05:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:09.624 13:05:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:09.624 13:05:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:09.624 13:05:28 -- common/autotest_common.sh@861 -- # break 00:20:09.624 13:05:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:09.624 13:05:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:09.624 13:05:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:09.624 1+0 records in 00:20:09.624 1+0 records out 00:20:09.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411736 s, 9.9 MB/s 00:20:09.624 13:05:28 -- common/autotest_common.sh@874 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.624 13:05:28 -- common/autotest_common.sh@874 -- # size=4096 00:20:09.624 13:05:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.624 13:05:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:09.624 13:05:28 -- common/autotest_common.sh@877 -- # return 0 00:20:09.624 13:05:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:09.624 13:05:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:09.624 13:05:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:09.882 /dev/nbd1 00:20:09.882 13:05:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:09.882 13:05:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:09.882 13:05:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:09.882 13:05:28 -- common/autotest_common.sh@857 -- # local i 00:20:09.882 13:05:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:09.882 13:05:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:09.882 13:05:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:09.882 13:05:28 -- common/autotest_common.sh@861 -- # break 00:20:09.882 13:05:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:09.882 13:05:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:09.882 13:05:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:09.882 1+0 records in 00:20:09.882 1+0 records out 00:20:09.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450072 s, 9.1 MB/s 00:20:09.882 13:05:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.882 13:05:28 -- common/autotest_common.sh@874 -- # size=4096 00:20:09.882 13:05:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.882 13:05:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:09.882 13:05:28 -- common/autotest_common.sh@877 -- # return 0 00:20:09.882 13:05:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:09.882 13:05:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:09.882 13:05:28 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:10.140 13:05:28 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@51 -- # local i 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:10.140 13:05:28 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:10.397 13:05:29 -- bdev/nbd_common.sh@37 -- # (( i++ )) 
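The trace above exports the surviving base bdev and the rebuilt spare as NBD block devices and byte-compares them to confirm the rebuild copied the data before tearing the devices down again. A minimal sketch of that pattern, assuming an SPDK target already serving RPCs on /var/tmp/spdk-raid.sock and bdevs named BaseBdev1 and spare (names, socket path and the cmp offset are taken from this run; the wait loop is a simplified stand-in for the waitfornbd helpers):

# Sketch only: replays the compare-over-NBD pattern from the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0    # export the surviving base bdev
"$rpc" -s "$sock" nbd_start_disk spare /dev/nbd1        # export the rebuilt spare

for nbd in nbd0 nbd1; do                                # wait until the kernel exposes both devices
    until grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
done

cmp -i 0 /dev/nbd0 /dev/nbd1 && echo "spare matches BaseBdev1"   # data_offset is 0 in this run

"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd1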
00:20:10.397 13:05:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:10.397 13:05:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:10.397 13:05:29 -- bdev/nbd_common.sh@41 -- # break 00:20:10.397 13:05:29 -- bdev/nbd_common.sh@45 -- # return 0 00:20:10.397 13:05:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:10.397 13:05:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@41 -- # break 00:20:10.668 13:05:29 -- bdev/nbd_common.sh@45 -- # return 0 00:20:10.668 13:05:29 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:10.668 13:05:29 -- bdev/bdev_raid.sh@709 -- # killprocess 125632 00:20:10.668 13:05:29 -- common/autotest_common.sh@926 -- # '[' -z 125632 ']' 00:20:10.668 13:05:29 -- common/autotest_common.sh@930 -- # kill -0 125632 00:20:10.668 13:05:29 -- common/autotest_common.sh@931 -- # uname 00:20:10.668 13:05:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:10.668 13:05:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125632 00:20:10.668 13:05:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:10.668 13:05:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:10.668 13:05:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125632' 00:20:10.668 killing process with pid 125632 00:20:10.668 13:05:29 -- common/autotest_common.sh@945 -- # kill 125632 00:20:10.668 Received shutdown signal, test time was about 60.000000 seconds 00:20:10.668 00:20:10.668 Latency(us) 00:20:10.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.668 =================================================================================================================== 00:20:10.668 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:10.668 13:05:29 -- common/autotest_common.sh@950 -- # wait 125632 00:20:10.668 [2024-06-11 13:05:29.423926] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:10.939 [2024-06-11 13:05:29.619256] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:11.875 ************************************ 00:20:11.875 END TEST raid_rebuild_test 00:20:11.875 ************************************ 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:11.875 00:20:11.875 real 0m21.688s 00:20:11.875 user 0m29.923s 00:20:11.875 sys 0m3.927s 00:20:11.875 13:05:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:11.875 13:05:30 -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:20:11.875 13:05:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 
1 ']' 00:20:11.875 13:05:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:11.875 13:05:30 -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 ************************************ 00:20:11.875 START TEST raid_rebuild_test_sb 00:20:11.875 ************************************ 00:20:11.875 13:05:30 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@544 -- # raid_pid=126210 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126210 /var/tmp/spdk-raid.sock 00:20:11.875 13:05:30 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:11.875 13:05:30 -- common/autotest_common.sh@819 -- # '[' -z 126210 ']' 00:20:11.875 13:05:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:11.875 13:05:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:11.875 13:05:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:11.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:11.875 13:05:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:11.875 13:05:30 -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 [2024-06-11 13:05:30.694343] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
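The run above has just killed the previous bdevperf instance and is launching a fresh one as the RPC target for the superblock variant of the rebuild test. A hedged sketch of that launch-and-wait step, with the binary path and flags copied from this log; the polling loop is only a simplified stand-in for the waitforlisten helper, using rpc_get_methods purely as a liveness probe:

# Sketch of the bdevperf launch used as the RPC target for these tests.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do   # crude stand-in for waitforlisten
    sleep 0.1
done
echo "bdevperf (pid $raid_pid) is listening on $sock"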
00:20:11.875 [2024-06-11 13:05:30.694782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126210 ] 00:20:11.875 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:11.875 Zero copy mechanism will not be used. 00:20:12.133 [2024-06-11 13:05:30.860916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.392 [2024-06-11 13:05:31.042029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.392 [2024-06-11 13:05:31.213745] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:12.959 13:05:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:12.959 13:05:31 -- common/autotest_common.sh@852 -- # return 0 00:20:12.959 13:05:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:12.959 13:05:31 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:12.959 13:05:31 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:13.218 BaseBdev1_malloc 00:20:13.218 13:05:31 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:13.475 [2024-06-11 13:05:32.079593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:13.475 [2024-06-11 13:05:32.079814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.475 [2024-06-11 13:05:32.079877] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:13.475 [2024-06-11 13:05:32.080162] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.475 [2024-06-11 13:05:32.082336] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.475 [2024-06-11 13:05:32.082496] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:13.475 BaseBdev1 00:20:13.475 13:05:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:13.475 13:05:32 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:13.475 13:05:32 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:13.475 BaseBdev2_malloc 00:20:13.734 13:05:32 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:13.734 [2024-06-11 13:05:32.504777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:13.734 [2024-06-11 13:05:32.505051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.734 [2024-06-11 13:05:32.505251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:13.734 [2024-06-11 13:05:32.505394] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.734 [2024-06-11 13:05:32.507390] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.734 [2024-06-11 13:05:32.507549] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:13.734 BaseBdev2 00:20:13.734 13:05:32 -- bdev/bdev_raid.sh@558 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:13.992 spare_malloc 00:20:13.992 13:05:32 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:14.250 spare_delay 00:20:14.250 13:05:32 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:14.509 [2024-06-11 13:05:33.111201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:14.509 [2024-06-11 13:05:33.111404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.509 [2024-06-11 13:05:33.111477] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:14.509 [2024-06-11 13:05:33.111727] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.509 [2024-06-11 13:05:33.113726] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.509 [2024-06-11 13:05:33.113897] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:14.509 spare 00:20:14.509 13:05:33 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:14.509 [2024-06-11 13:05:33.343297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:14.509 [2024-06-11 13:05:33.345161] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:14.509 [2024-06-11 13:05:33.345480] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:20:14.509 [2024-06-11 13:05:33.345572] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:14.509 [2024-06-11 13:05:33.345857] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:14.509 [2024-06-11 13:05:33.346421] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:20:14.509 [2024-06-11 13:05:33.346561] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:20:14.509 [2024-06-11 13:05:33.346841] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.767 
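The RPC calls above assemble the raid1 array under test: two malloc bdevs wrapped in passthru bdevs become the base devices, and bdev_raid_create builds raid_bdev1 on top of them with an on-disk superblock (-s), after which the state is read back with bdev_raid_get_bdevs and filtered through jq. A condensed sketch of the same assembly, reusing only the RPC names and sizes recorded in this run (error handling and the delay/spare bdevs are omitted):

# Condensed sketch of the raid1 assembly recorded above.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc      # 32 MB backing store, 512-byte blocks
rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2

rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1   # -s writes a superblock

rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect "online"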
13:05:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:14.767 "name": "raid_bdev1", 00:20:14.767 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:14.767 "strip_size_kb": 0, 00:20:14.767 "state": "online", 00:20:14.767 "raid_level": "raid1", 00:20:14.767 "superblock": true, 00:20:14.767 "num_base_bdevs": 2, 00:20:14.767 "num_base_bdevs_discovered": 2, 00:20:14.767 "num_base_bdevs_operational": 2, 00:20:14.767 "base_bdevs_list": [ 00:20:14.767 { 00:20:14.767 "name": "BaseBdev1", 00:20:14.767 "uuid": "102fc6a2-883c-5910-83bd-4614aa5c31d2", 00:20:14.767 "is_configured": true, 00:20:14.767 "data_offset": 2048, 00:20:14.767 "data_size": 63488 00:20:14.767 }, 00:20:14.767 { 00:20:14.767 "name": "BaseBdev2", 00:20:14.767 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:14.767 "is_configured": true, 00:20:14.767 "data_offset": 2048, 00:20:14.767 "data_size": 63488 00:20:14.767 } 00:20:14.767 ] 00:20:14.767 }' 00:20:14.767 13:05:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:14.767 13:05:33 -- common/autotest_common.sh@10 -- # set +x 00:20:15.702 13:05:34 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:15.702 13:05:34 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:15.702 [2024-06-11 13:05:34.467698] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.702 13:05:34 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:15.702 13:05:34 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.702 13:05:34 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:15.961 13:05:34 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:15.962 13:05:34 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:15.962 13:05:34 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:15.962 13:05:34 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:15.962 13:05:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:15.962 13:05:34 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:15.962 13:05:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:15.962 13:05:34 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:15.962 13:05:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:15.962 13:05:34 -- bdev/nbd_common.sh@12 -- # local i 00:20:15.962 13:05:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:15.962 13:05:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:15.962 13:05:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:16.220 [2024-06-11 13:05:34.927620] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:16.220 /dev/nbd0 00:20:16.220 13:05:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:16.220 13:05:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:16.220 13:05:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:16.220 13:05:34 -- common/autotest_common.sh@857 -- # local i 00:20:16.220 13:05:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:16.220 13:05:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:16.220 13:05:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:16.220 13:05:34 -- common/autotest_common.sh@861 -- # break 00:20:16.220 13:05:34 -- 
common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:16.220 13:05:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:16.221 13:05:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:16.221 1+0 records in 00:20:16.221 1+0 records out 00:20:16.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048731 s, 8.4 MB/s 00:20:16.221 13:05:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:16.221 13:05:34 -- common/autotest_common.sh@874 -- # size=4096 00:20:16.221 13:05:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:16.221 13:05:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:16.221 13:05:34 -- common/autotest_common.sh@877 -- # return 0 00:20:16.221 13:05:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:16.221 13:05:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:16.221 13:05:34 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:16.221 13:05:34 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:16.221 13:05:34 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:21.488 63488+0 records in 00:20:21.488 63488+0 records out 00:20:21.488 32505856 bytes (33 MB, 31 MiB) copied, 4.84874 s, 6.7 MB/s 00:20:21.488 13:05:39 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:21.488 13:05:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:21.488 13:05:39 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:21.488 13:05:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:21.488 13:05:39 -- bdev/nbd_common.sh@51 -- # local i 00:20:21.488 13:05:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:21.488 13:05:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:21.488 [2024-06-11 13:05:40.102271] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@41 -- # break 00:20:21.488 13:05:40 -- bdev/nbd_common.sh@45 -- # return 0 00:20:21.488 13:05:40 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:21.748 [2024-06-11 13:05:40.446055] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:21.748 13:05:40 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:21.748 13:05:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:21.748 13:05:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:21.748 13:05:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:21.748 
13:05:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:21.748 13:05:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:21.748 13:05:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:21.748 13:05:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:21.748 13:05:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:21.748 13:05:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:21.748 13:05:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.748 13:05:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.006 13:05:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:22.006 "name": "raid_bdev1", 00:20:22.006 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:22.006 "strip_size_kb": 0, 00:20:22.006 "state": "online", 00:20:22.006 "raid_level": "raid1", 00:20:22.006 "superblock": true, 00:20:22.006 "num_base_bdevs": 2, 00:20:22.006 "num_base_bdevs_discovered": 1, 00:20:22.006 "num_base_bdevs_operational": 1, 00:20:22.006 "base_bdevs_list": [ 00:20:22.006 { 00:20:22.006 "name": null, 00:20:22.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.006 "is_configured": false, 00:20:22.006 "data_offset": 2048, 00:20:22.006 "data_size": 63488 00:20:22.006 }, 00:20:22.006 { 00:20:22.006 "name": "BaseBdev2", 00:20:22.006 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:22.006 "is_configured": true, 00:20:22.006 "data_offset": 2048, 00:20:22.006 "data_size": 63488 00:20:22.006 } 00:20:22.006 ] 00:20:22.006 }' 00:20:22.006 13:05:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:22.006 13:05:40 -- common/autotest_common.sh@10 -- # set +x 00:20:22.574 13:05:41 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:22.832 [2024-06-11 13:05:41.574344] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:22.832 [2024-06-11 13:05:41.574673] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:22.832 [2024-06-11 13:05:41.589230] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4e30 00:20:22.832 [2024-06-11 13:05:41.591391] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:22.832 13:05:41 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:23.803 13:05:42 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.803 13:05:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:23.803 13:05:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:23.803 13:05:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:23.803 13:05:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:23.803 13:05:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.803 13:05:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.061 13:05:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:24.061 "name": "raid_bdev1", 00:20:24.061 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:24.061 "strip_size_kb": 0, 00:20:24.061 "state": "online", 00:20:24.061 "raid_level": "raid1", 00:20:24.061 "superblock": true, 00:20:24.061 "num_base_bdevs": 2, 00:20:24.061 "num_base_bdevs_discovered": 2, 00:20:24.061 
"num_base_bdevs_operational": 2, 00:20:24.061 "process": { 00:20:24.061 "type": "rebuild", 00:20:24.061 "target": "spare", 00:20:24.061 "progress": { 00:20:24.061 "blocks": 22528, 00:20:24.061 "percent": 35 00:20:24.061 } 00:20:24.061 }, 00:20:24.061 "base_bdevs_list": [ 00:20:24.061 { 00:20:24.061 "name": "spare", 00:20:24.061 "uuid": "5037c5bb-dfd8-5ee2-b396-3ad5b6550104", 00:20:24.061 "is_configured": true, 00:20:24.061 "data_offset": 2048, 00:20:24.061 "data_size": 63488 00:20:24.061 }, 00:20:24.061 { 00:20:24.061 "name": "BaseBdev2", 00:20:24.061 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:24.061 "is_configured": true, 00:20:24.061 "data_offset": 2048, 00:20:24.061 "data_size": 63488 00:20:24.061 } 00:20:24.061 ] 00:20:24.061 }' 00:20:24.061 13:05:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:24.061 13:05:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:24.061 13:05:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:24.319 13:05:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:24.319 13:05:42 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:24.319 [2024-06-11 13:05:43.144941] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:24.576 [2024-06-11 13:05:43.202061] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:24.576 [2024-06-11 13:05:43.202307] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.577 13:05:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.835 13:05:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:24.835 "name": "raid_bdev1", 00:20:24.835 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:24.835 "strip_size_kb": 0, 00:20:24.835 "state": "online", 00:20:24.835 "raid_level": "raid1", 00:20:24.835 "superblock": true, 00:20:24.835 "num_base_bdevs": 2, 00:20:24.835 "num_base_bdevs_discovered": 1, 00:20:24.835 "num_base_bdevs_operational": 1, 00:20:24.835 "base_bdevs_list": [ 00:20:24.835 { 00:20:24.835 "name": null, 00:20:24.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.835 "is_configured": false, 00:20:24.835 "data_offset": 2048, 00:20:24.835 "data_size": 63488 00:20:24.835 }, 00:20:24.835 { 00:20:24.835 "name": "BaseBdev2", 00:20:24.835 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:24.835 "is_configured": true, 00:20:24.835 "data_offset": 2048, 00:20:24.835 "data_size": 63488 00:20:24.835 
} 00:20:24.835 ] 00:20:24.835 }' 00:20:24.835 13:05:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:24.835 13:05:43 -- common/autotest_common.sh@10 -- # set +x 00:20:25.399 13:05:44 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:25.399 13:05:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:25.399 13:05:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:25.399 13:05:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:25.399 13:05:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:25.399 13:05:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.399 13:05:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.656 13:05:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:25.656 "name": "raid_bdev1", 00:20:25.656 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:25.656 "strip_size_kb": 0, 00:20:25.656 "state": "online", 00:20:25.656 "raid_level": "raid1", 00:20:25.656 "superblock": true, 00:20:25.656 "num_base_bdevs": 2, 00:20:25.656 "num_base_bdevs_discovered": 1, 00:20:25.656 "num_base_bdevs_operational": 1, 00:20:25.656 "base_bdevs_list": [ 00:20:25.656 { 00:20:25.656 "name": null, 00:20:25.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.656 "is_configured": false, 00:20:25.656 "data_offset": 2048, 00:20:25.656 "data_size": 63488 00:20:25.656 }, 00:20:25.656 { 00:20:25.656 "name": "BaseBdev2", 00:20:25.656 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:25.656 "is_configured": true, 00:20:25.656 "data_offset": 2048, 00:20:25.656 "data_size": 63488 00:20:25.656 } 00:20:25.656 ] 00:20:25.656 }' 00:20:25.656 13:05:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:25.656 13:05:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:25.656 13:05:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:25.656 13:05:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:25.656 13:05:44 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:25.914 [2024-06-11 13:05:44.648523] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:25.914 [2024-06-11 13:05:44.648776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.914 [2024-06-11 13:05:44.661845] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4fd0 00:20:25.914 [2024-06-11 13:05:44.664024] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:25.914 13:05:44 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:26.846 13:05:45 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:26.846 13:05:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:26.846 13:05:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:26.846 13:05:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:26.846 13:05:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:26.846 13:05:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.846 13:05:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.104 13:05:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.104 "name": 
"raid_bdev1", 00:20:27.104 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:27.104 "strip_size_kb": 0, 00:20:27.104 "state": "online", 00:20:27.104 "raid_level": "raid1", 00:20:27.104 "superblock": true, 00:20:27.104 "num_base_bdevs": 2, 00:20:27.104 "num_base_bdevs_discovered": 2, 00:20:27.104 "num_base_bdevs_operational": 2, 00:20:27.104 "process": { 00:20:27.104 "type": "rebuild", 00:20:27.104 "target": "spare", 00:20:27.104 "progress": { 00:20:27.104 "blocks": 24576, 00:20:27.104 "percent": 38 00:20:27.104 } 00:20:27.104 }, 00:20:27.104 "base_bdevs_list": [ 00:20:27.104 { 00:20:27.104 "name": "spare", 00:20:27.104 "uuid": "5037c5bb-dfd8-5ee2-b396-3ad5b6550104", 00:20:27.104 "is_configured": true, 00:20:27.104 "data_offset": 2048, 00:20:27.104 "data_size": 63488 00:20:27.104 }, 00:20:27.104 { 00:20:27.104 "name": "BaseBdev2", 00:20:27.104 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:27.104 "is_configured": true, 00:20:27.104 "data_offset": 2048, 00:20:27.104 "data_size": 63488 00:20:27.104 } 00:20:27.104 ] 00:20:27.104 }' 00:20:27.104 13:05:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:27.363 13:05:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.363 13:05:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:27.363 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@657 -- # local timeout=418 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.363 13:05:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.622 13:05:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.622 "name": "raid_bdev1", 00:20:27.622 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:27.622 "strip_size_kb": 0, 00:20:27.622 "state": "online", 00:20:27.622 "raid_level": "raid1", 00:20:27.622 "superblock": true, 00:20:27.622 "num_base_bdevs": 2, 00:20:27.622 "num_base_bdevs_discovered": 2, 00:20:27.622 "num_base_bdevs_operational": 2, 00:20:27.622 "process": { 00:20:27.622 "type": "rebuild", 00:20:27.622 "target": "spare", 00:20:27.622 "progress": { 00:20:27.622 "blocks": 32768, 00:20:27.622 "percent": 51 00:20:27.622 } 00:20:27.622 }, 00:20:27.622 "base_bdevs_list": [ 00:20:27.622 { 00:20:27.622 "name": "spare", 00:20:27.622 "uuid": "5037c5bb-dfd8-5ee2-b396-3ad5b6550104", 00:20:27.622 "is_configured": true, 00:20:27.622 "data_offset": 2048, 00:20:27.622 "data_size": 63488 00:20:27.622 }, 
00:20:27.622 { 00:20:27.622 "name": "BaseBdev2", 00:20:27.622 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:27.622 "is_configured": true, 00:20:27.622 "data_offset": 2048, 00:20:27.622 "data_size": 63488 00:20:27.622 } 00:20:27.622 ] 00:20:27.622 }' 00:20:27.622 13:05:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:27.622 13:05:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.622 13:05:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:27.622 13:05:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.622 13:05:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:28.999 "name": "raid_bdev1", 00:20:28.999 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:28.999 "strip_size_kb": 0, 00:20:28.999 "state": "online", 00:20:28.999 "raid_level": "raid1", 00:20:28.999 "superblock": true, 00:20:28.999 "num_base_bdevs": 2, 00:20:28.999 "num_base_bdevs_discovered": 2, 00:20:28.999 "num_base_bdevs_operational": 2, 00:20:28.999 "process": { 00:20:28.999 "type": "rebuild", 00:20:28.999 "target": "spare", 00:20:28.999 "progress": { 00:20:28.999 "blocks": 59392, 00:20:28.999 "percent": 93 00:20:28.999 } 00:20:28.999 }, 00:20:28.999 "base_bdevs_list": [ 00:20:28.999 { 00:20:28.999 "name": "spare", 00:20:28.999 "uuid": "5037c5bb-dfd8-5ee2-b396-3ad5b6550104", 00:20:28.999 "is_configured": true, 00:20:28.999 "data_offset": 2048, 00:20:28.999 "data_size": 63488 00:20:28.999 }, 00:20:28.999 { 00:20:28.999 "name": "BaseBdev2", 00:20:28.999 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:28.999 "is_configured": true, 00:20:28.999 "data_offset": 2048, 00:20:28.999 "data_size": 63488 00:20:28.999 } 00:20:28.999 ] 00:20:28.999 }' 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.999 13:05:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:29.000 [2024-06-11 13:05:47.784090] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:29.000 [2024-06-11 13:05:47.784456] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:29.000 [2024-06-11 13:05:47.784772] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.376 13:05:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:30.376 13:05:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.376 13:05:48 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:20:30.376 13:05:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:30.376 13:05:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:30.376 13:05:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:30.376 13:05:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.376 13:05:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.376 13:05:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:30.376 "name": "raid_bdev1", 00:20:30.376 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:30.376 "strip_size_kb": 0, 00:20:30.376 "state": "online", 00:20:30.376 "raid_level": "raid1", 00:20:30.376 "superblock": true, 00:20:30.376 "num_base_bdevs": 2, 00:20:30.376 "num_base_bdevs_discovered": 2, 00:20:30.376 "num_base_bdevs_operational": 2, 00:20:30.376 "base_bdevs_list": [ 00:20:30.376 { 00:20:30.376 "name": "spare", 00:20:30.376 "uuid": "5037c5bb-dfd8-5ee2-b396-3ad5b6550104", 00:20:30.376 "is_configured": true, 00:20:30.376 "data_offset": 2048, 00:20:30.376 "data_size": 63488 00:20:30.376 }, 00:20:30.376 { 00:20:30.376 "name": "BaseBdev2", 00:20:30.376 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:30.376 "is_configured": true, 00:20:30.376 "data_offset": 2048, 00:20:30.376 "data_size": 63488 00:20:30.376 } 00:20:30.376 ] 00:20:30.376 }' 00:20:30.376 13:05:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:30.376 13:05:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:30.376 13:05:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:30.376 13:05:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:30.376 13:05:49 -- bdev/bdev_raid.sh@660 -- # break 00:20:30.376 13:05:49 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:30.376 13:05:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:30.376 13:05:49 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:30.376 13:05:49 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:30.376 13:05:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:30.376 13:05:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.376 13:05:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.635 13:05:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:30.635 "name": "raid_bdev1", 00:20:30.635 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:30.635 "strip_size_kb": 0, 00:20:30.635 "state": "online", 00:20:30.635 "raid_level": "raid1", 00:20:30.635 "superblock": true, 00:20:30.635 "num_base_bdevs": 2, 00:20:30.635 "num_base_bdevs_discovered": 2, 00:20:30.635 "num_base_bdevs_operational": 2, 00:20:30.635 "base_bdevs_list": [ 00:20:30.635 { 00:20:30.635 "name": "spare", 00:20:30.635 "uuid": "5037c5bb-dfd8-5ee2-b396-3ad5b6550104", 00:20:30.635 "is_configured": true, 00:20:30.635 "data_offset": 2048, 00:20:30.635 "data_size": 63488 00:20:30.635 }, 00:20:30.635 { 00:20:30.635 "name": "BaseBdev2", 00:20:30.635 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:30.635 "is_configured": true, 00:20:30.635 "data_offset": 2048, 00:20:30.635 "data_size": 63488 00:20:30.635 } 00:20:30.635 ] 00:20:30.635 }' 00:20:30.635 13:05:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:30.635 13:05:49 -- bdev/bdev_raid.sh@190 -- # [[ none == 
\n\o\n\e ]] 00:20:30.635 13:05:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:30.894 "name": "raid_bdev1", 00:20:30.894 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:30.894 "strip_size_kb": 0, 00:20:30.894 "state": "online", 00:20:30.894 "raid_level": "raid1", 00:20:30.894 "superblock": true, 00:20:30.894 "num_base_bdevs": 2, 00:20:30.894 "num_base_bdevs_discovered": 2, 00:20:30.894 "num_base_bdevs_operational": 2, 00:20:30.894 "base_bdevs_list": [ 00:20:30.894 { 00:20:30.894 "name": "spare", 00:20:30.894 "uuid": "5037c5bb-dfd8-5ee2-b396-3ad5b6550104", 00:20:30.894 "is_configured": true, 00:20:30.894 "data_offset": 2048, 00:20:30.894 "data_size": 63488 00:20:30.894 }, 00:20:30.894 { 00:20:30.894 "name": "BaseBdev2", 00:20:30.894 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:30.894 "is_configured": true, 00:20:30.894 "data_offset": 2048, 00:20:30.894 "data_size": 63488 00:20:30.894 } 00:20:30.894 ] 00:20:30.894 }' 00:20:30.894 13:05:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:30.894 13:05:49 -- common/autotest_common.sh@10 -- # set +x 00:20:31.829 13:05:50 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:31.829 [2024-06-11 13:05:50.587610] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:31.829 [2024-06-11 13:05:50.587867] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:31.829 [2024-06-11 13:05:50.588078] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.829 [2024-06-11 13:05:50.588280] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:31.829 [2024-06-11 13:05:50.588410] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:20:31.829 13:05:50 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.829 13:05:50 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:32.087 13:05:50 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:32.087 13:05:50 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:32.087 13:05:50 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 
'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:32.087 13:05:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:32.087 13:05:50 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:32.087 13:05:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:32.087 13:05:50 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:32.087 13:05:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:32.087 13:05:50 -- bdev/nbd_common.sh@12 -- # local i 00:20:32.087 13:05:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:32.087 13:05:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:32.087 13:05:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:32.346 /dev/nbd0 00:20:32.346 13:05:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:32.346 13:05:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:32.346 13:05:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:32.346 13:05:50 -- common/autotest_common.sh@857 -- # local i 00:20:32.346 13:05:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:32.346 13:05:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:32.346 13:05:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:32.346 13:05:51 -- common/autotest_common.sh@861 -- # break 00:20:32.346 13:05:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:32.346 13:05:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:32.346 13:05:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:32.346 1+0 records in 00:20:32.346 1+0 records out 00:20:32.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688165 s, 6.0 MB/s 00:20:32.346 13:05:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.346 13:05:51 -- common/autotest_common.sh@874 -- # size=4096 00:20:32.346 13:05:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.346 13:05:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:32.346 13:05:51 -- common/autotest_common.sh@877 -- # return 0 00:20:32.346 13:05:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:32.346 13:05:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:32.346 13:05:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:32.604 /dev/nbd1 00:20:32.604 13:05:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:32.604 13:05:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:32.604 13:05:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:32.604 13:05:51 -- common/autotest_common.sh@857 -- # local i 00:20:32.604 13:05:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:32.604 13:05:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:32.604 13:05:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:32.604 13:05:51 -- common/autotest_common.sh@861 -- # break 00:20:32.604 13:05:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:32.604 13:05:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:32.604 13:05:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:32.604 1+0 records in 00:20:32.604 1+0 records out 00:20:32.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000675105 s, 6.1 MB/s 00:20:32.604 13:05:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.604 13:05:51 -- common/autotest_common.sh@874 -- # size=4096 00:20:32.604 13:05:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.604 13:05:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:32.604 13:05:51 -- common/autotest_common.sh@877 -- # return 0 00:20:32.604 13:05:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:32.604 13:05:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:32.604 13:05:51 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:32.863 13:05:51 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@51 -- # local i 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:32.863 13:05:51 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:33.121 13:05:51 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:33.121 13:05:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:33.121 13:05:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:33.121 13:05:51 -- bdev/nbd_common.sh@41 -- # break 00:20:33.121 13:05:51 -- bdev/nbd_common.sh@45 -- # return 0 00:20:33.121 13:05:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:33.121 13:05:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@41 -- # break 00:20:33.380 13:05:52 -- bdev/nbd_common.sh@45 -- # return 0 00:20:33.380 13:05:52 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:33.380 13:05:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:33.380 13:05:52 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:33.380 13:05:52 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev1 00:20:33.638 13:05:52 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:33.897 [2024-06-11 13:05:52.629830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:33.897 [2024-06-11 13:05:52.630167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.897 [2024-06-11 13:05:52.630337] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:33.897 [2024-06-11 13:05:52.630466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.897 [2024-06-11 13:05:52.632914] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.897 [2024-06-11 13:05:52.633116] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:33.897 [2024-06-11 13:05:52.633339] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:33.897 [2024-06-11 13:05:52.633523] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:33.897 BaseBdev1 00:20:33.897 13:05:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:33.897 13:05:52 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:33.897 13:05:52 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:34.155 13:05:52 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:34.413 [2024-06-11 13:05:53.133940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:34.413 [2024-06-11 13:05:53.134168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.413 [2024-06-11 13:05:53.134240] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:34.413 [2024-06-11 13:05:53.134538] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.413 [2024-06-11 13:05:53.135105] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.413 [2024-06-11 13:05:53.135297] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:34.413 [2024-06-11 13:05:53.135492] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:34.413 [2024-06-11 13:05:53.135613] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:34.413 [2024-06-11 13:05:53.135710] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.413 [2024-06-11 13:05:53.135852] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:20:34.413 [2024-06-11 13:05:53.136021] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:34.413 BaseBdev2 00:20:34.413 13:05:53 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:34.672 13:05:53 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:34.672 [2024-06-11 
13:05:53.497990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:34.672 [2024-06-11 13:05:53.498206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.672 [2024-06-11 13:05:53.498284] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:34.672 [2024-06-11 13:05:53.498456] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.672 [2024-06-11 13:05:53.498962] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.672 [2024-06-11 13:05:53.499137] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:34.672 [2024-06-11 13:05:53.499353] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:34.672 [2024-06-11 13:05:53.499482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:34.672 spare 00:20:34.931 13:05:53 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:34.931 13:05:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:34.931 13:05:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:34.931 13:05:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:34.931 13:05:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:34.931 13:05:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:34.931 13:05:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:34.931 13:05:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:34.931 13:05:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:34.931 13:05:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:34.931 13:05:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.932 13:05:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.932 [2024-06-11 13:05:53.599688] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:20:34.932 [2024-06-11 13:05:53.599841] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:34.932 [2024-06-11 13:05:53.600017] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5b10 00:20:34.932 [2024-06-11 13:05:53.600637] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:20:34.932 [2024-06-11 13:05:53.600793] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:20:34.932 [2024-06-11 13:05:53.601016] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.932 13:05:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:34.932 "name": "raid_bdev1", 00:20:34.932 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:34.932 "strip_size_kb": 0, 00:20:34.932 "state": "online", 00:20:34.932 "raid_level": "raid1", 00:20:34.932 "superblock": true, 00:20:34.932 "num_base_bdevs": 2, 00:20:34.932 "num_base_bdevs_discovered": 2, 00:20:34.932 "num_base_bdevs_operational": 2, 00:20:34.932 "base_bdevs_list": [ 00:20:34.932 { 00:20:34.932 "name": "spare", 00:20:34.932 "uuid": "5037c5bb-dfd8-5ee2-b396-3ad5b6550104", 00:20:34.932 "is_configured": true, 00:20:34.932 "data_offset": 2048, 00:20:34.932 "data_size": 63488 00:20:34.932 }, 00:20:34.932 { 00:20:34.932 "name": "BaseBdev2", 00:20:34.932 "uuid": 
"479affb6-32d6-5500-8a24-efcebcedc711", 00:20:34.932 "is_configured": true, 00:20:34.932 "data_offset": 2048, 00:20:34.932 "data_size": 63488 00:20:34.932 } 00:20:34.932 ] 00:20:34.932 }' 00:20:34.932 13:05:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:34.932 13:05:53 -- common/autotest_common.sh@10 -- # set +x 00:20:35.498 13:05:54 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.498 13:05:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:35.498 13:05:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:35.498 13:05:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:35.498 13:05:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:35.498 13:05:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.498 13:05:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.756 13:05:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:35.756 "name": "raid_bdev1", 00:20:35.756 "uuid": "31c5f424-fed3-4fe5-962e-d9e9d7ee85df", 00:20:35.756 "strip_size_kb": 0, 00:20:35.756 "state": "online", 00:20:35.756 "raid_level": "raid1", 00:20:35.756 "superblock": true, 00:20:35.756 "num_base_bdevs": 2, 00:20:35.756 "num_base_bdevs_discovered": 2, 00:20:35.756 "num_base_bdevs_operational": 2, 00:20:35.756 "base_bdevs_list": [ 00:20:35.756 { 00:20:35.756 "name": "spare", 00:20:35.756 "uuid": "5037c5bb-dfd8-5ee2-b396-3ad5b6550104", 00:20:35.756 "is_configured": true, 00:20:35.756 "data_offset": 2048, 00:20:35.756 "data_size": 63488 00:20:35.756 }, 00:20:35.756 { 00:20:35.756 "name": "BaseBdev2", 00:20:35.756 "uuid": "479affb6-32d6-5500-8a24-efcebcedc711", 00:20:35.756 "is_configured": true, 00:20:35.756 "data_offset": 2048, 00:20:35.756 "data_size": 63488 00:20:35.756 } 00:20:35.756 ] 00:20:35.756 }' 00:20:35.756 13:05:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:35.756 13:05:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:35.756 13:05:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:36.016 13:05:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:36.016 13:05:54 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.016 13:05:54 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:36.016 13:05:54 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.016 13:05:54 -- bdev/bdev_raid.sh@709 -- # killprocess 126210 00:20:36.016 13:05:54 -- common/autotest_common.sh@926 -- # '[' -z 126210 ']' 00:20:36.016 13:05:54 -- common/autotest_common.sh@930 -- # kill -0 126210 00:20:36.016 13:05:54 -- common/autotest_common.sh@931 -- # uname 00:20:36.016 13:05:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:36.016 13:05:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126210 00:20:36.016 13:05:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:36.016 13:05:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:36.016 13:05:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126210' 00:20:36.016 killing process with pid 126210 00:20:36.016 13:05:54 -- common/autotest_common.sh@945 -- # kill 126210 00:20:36.016 13:05:54 -- common/autotest_common.sh@950 -- # wait 126210 00:20:36.016 Received shutdown signal, test time was about 60.000000 seconds 00:20:36.016 
00:20:36.016 Latency(us) 00:20:36.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.016 =================================================================================================================== 00:20:36.016 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.016 [2024-06-11 13:05:54.804270] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:36.016 [2024-06-11 13:05:54.804354] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:36.016 [2024-06-11 13:05:54.804425] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:36.016 [2024-06-11 13:05:54.804437] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:20:36.274 [2024-06-11 13:05:55.012145] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:37.649 ************************************ 00:20:37.649 END TEST raid_rebuild_test_sb 00:20:37.649 ************************************ 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:37.649 00:20:37.649 real 0m25.420s 00:20:37.649 user 0m37.192s 00:20:37.649 sys 0m3.771s 00:20:37.649 13:05:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.649 13:05:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:20:37.649 13:05:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:37.649 13:05:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:37.649 13:05:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.649 ************************************ 00:20:37.649 START TEST raid_rebuild_test_io 00:20:37.649 ************************************ 00:20:37.649 13:05:56 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:37.649 13:05:56 
-- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@544 -- # raid_pid=126896 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126896 /var/tmp/spdk-raid.sock 00:20:37.649 13:05:56 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:37.649 13:05:56 -- common/autotest_common.sh@819 -- # '[' -z 126896 ']' 00:20:37.649 13:05:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:37.649 13:05:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:37.649 13:05:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:37.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:37.649 13:05:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:37.649 13:05:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.649 [2024-06-11 13:05:56.175001] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:37.649 [2024-06-11 13:05:56.175388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126896 ] 00:20:37.649 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:37.649 Zero copy mechanism will not be used. 00:20:37.649 [2024-06-11 13:05:56.335842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.908 [2024-06-11 13:05:56.524130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.908 [2024-06-11 13:05:56.711459] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:38.473 13:05:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:38.473 13:05:57 -- common/autotest_common.sh@852 -- # return 0 00:20:38.473 13:05:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:38.473 13:05:57 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:38.473 13:05:57 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:38.731 BaseBdev1 00:20:38.731 13:05:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:38.731 13:05:57 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:38.731 13:05:57 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:38.990 BaseBdev2 00:20:38.990 13:05:57 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:38.990 spare_malloc 00:20:38.990 13:05:57 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:39.248 spare_delay 00:20:39.248 13:05:58 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:39.507 [2024-06-11 13:05:58.194922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:39.507 [2024-06-11 13:05:58.195393] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.507 [2024-06-11 13:05:58.195598] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:39.507 [2024-06-11 13:05:58.195818] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.507 [2024-06-11 13:05:58.198833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.507 [2024-06-11 13:05:58.199036] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:39.507 spare 00:20:39.507 13:05:58 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:39.765 [2024-06-11 13:05:58.379542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:39.765 [2024-06-11 13:05:58.381922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:39.765 [2024-06-11 13:05:58.382166] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:20:39.765 [2024-06-11 13:05:58.382320] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:39.766 [2024-06-11 13:05:58.382570] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:20:39.766 [2024-06-11 13:05:58.383120] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:20:39.766 [2024-06-11 13:05:58.383232] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:20:39.766 [2024-06-11 13:05:58.383562] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.766 13:05:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.024 13:05:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.024 "name": "raid_bdev1", 00:20:40.024 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:40.024 "strip_size_kb": 0, 00:20:40.024 "state": "online", 00:20:40.024 "raid_level": "raid1", 00:20:40.024 "superblock": false, 00:20:40.024 "num_base_bdevs": 2, 00:20:40.024 "num_base_bdevs_discovered": 2, 00:20:40.024 "num_base_bdevs_operational": 2, 00:20:40.024 "base_bdevs_list": [ 00:20:40.024 { 00:20:40.024 "name": "BaseBdev1", 00:20:40.024 "uuid": "8c4b6872-0c9c-4902-a209-3fb78e9a0efe", 00:20:40.024 "is_configured": true, 00:20:40.024 "data_offset": 0, 00:20:40.024 "data_size": 65536 00:20:40.024 }, 00:20:40.024 { 00:20:40.024 "name": "BaseBdev2", 
00:20:40.024 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:40.024 "is_configured": true, 00:20:40.024 "data_offset": 0, 00:20:40.024 "data_size": 65536 00:20:40.024 } 00:20:40.024 ] 00:20:40.024 }' 00:20:40.024 13:05:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.024 13:05:58 -- common/autotest_common.sh@10 -- # set +x 00:20:40.592 13:05:59 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:40.592 13:05:59 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:40.851 [2024-06-11 13:05:59.447824] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.851 13:05:59 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:40.851 13:05:59 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.851 13:05:59 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:40.851 13:05:59 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:40.851 13:05:59 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:40.851 13:05:59 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:40.851 13:05:59 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:41.110 [2024-06-11 13:05:59.751117] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:41.110 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:41.110 Zero copy mechanism will not be used. 00:20:41.110 Running I/O for 60 seconds... 00:20:41.110 [2024-06-11 13:05:59.828194] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:41.110 [2024-06-11 13:05:59.834516] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.110 13:05:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.369 13:06:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:41.369 "name": "raid_bdev1", 00:20:41.369 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:41.369 "strip_size_kb": 0, 00:20:41.369 "state": "online", 00:20:41.369 "raid_level": "raid1", 00:20:41.369 "superblock": false, 00:20:41.369 "num_base_bdevs": 2, 00:20:41.369 "num_base_bdevs_discovered": 1, 00:20:41.369 "num_base_bdevs_operational": 1, 00:20:41.369 "base_bdevs_list": [ 00:20:41.369 { 00:20:41.369 "name": null, 00:20:41.369 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:41.369 "is_configured": false, 00:20:41.369 "data_offset": 0, 00:20:41.369 "data_size": 65536 00:20:41.369 }, 00:20:41.369 { 00:20:41.369 "name": "BaseBdev2", 00:20:41.369 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:41.369 "is_configured": true, 00:20:41.369 "data_offset": 0, 00:20:41.369 "data_size": 65536 00:20:41.369 } 00:20:41.369 ] 00:20:41.369 }' 00:20:41.369 13:06:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:41.369 13:06:00 -- common/autotest_common.sh@10 -- # set +x 00:20:41.937 13:06:00 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:42.196 [2024-06-11 13:06:00.924810] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:42.196 [2024-06-11 13:06:00.925110] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:42.196 13:06:00 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:42.196 [2024-06-11 13:06:00.977877] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:42.196 [2024-06-11 13:06:00.980093] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:42.454 [2024-06-11 13:06:01.088732] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:42.454 [2024-06-11 13:06:01.089339] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:42.454 [2024-06-11 13:06:01.291482] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:42.454 [2024-06-11 13:06:01.291803] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:43.021 [2024-06-11 13:06:01.622800] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:43.021 [2024-06-11 13:06:01.749136] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:43.021 [2024-06-11 13:06:01.749484] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:43.280 13:06:01 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.280 13:06:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:43.280 13:06:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:43.280 13:06:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:43.280 13:06:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:43.280 13:06:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.280 13:06:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.280 [2024-06-11 13:06:02.054721] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:43.280 [2024-06-11 13:06:02.055494] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:43.537 13:06:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:43.537 "name": "raid_bdev1", 00:20:43.537 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:43.537 
"strip_size_kb": 0, 00:20:43.537 "state": "online", 00:20:43.537 "raid_level": "raid1", 00:20:43.537 "superblock": false, 00:20:43.537 "num_base_bdevs": 2, 00:20:43.537 "num_base_bdevs_discovered": 2, 00:20:43.537 "num_base_bdevs_operational": 2, 00:20:43.537 "process": { 00:20:43.537 "type": "rebuild", 00:20:43.537 "target": "spare", 00:20:43.537 "progress": { 00:20:43.537 "blocks": 14336, 00:20:43.537 "percent": 21 00:20:43.537 } 00:20:43.537 }, 00:20:43.537 "base_bdevs_list": [ 00:20:43.537 { 00:20:43.537 "name": "spare", 00:20:43.537 "uuid": "5507ba60-c2ac-5de4-8492-beede90c8aab", 00:20:43.537 "is_configured": true, 00:20:43.537 "data_offset": 0, 00:20:43.537 "data_size": 65536 00:20:43.537 }, 00:20:43.537 { 00:20:43.537 "name": "BaseBdev2", 00:20:43.537 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:43.537 "is_configured": true, 00:20:43.537 "data_offset": 0, 00:20:43.537 "data_size": 65536 00:20:43.537 } 00:20:43.537 ] 00:20:43.537 }' 00:20:43.537 13:06:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:43.537 [2024-06-11 13:06:02.190120] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:43.537 13:06:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.537 13:06:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:43.537 13:06:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.537 13:06:02 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:43.794 [2024-06-11 13:06:02.508395] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:43.794 [2024-06-11 13:06:02.547219] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:43.794 [2024-06-11 13:06:02.629210] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:44.050 [2024-06-11 13:06:02.635483] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:44.050 [2024-06-11 13:06:02.742744] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:44.050 [2024-06-11 13:06:02.750495] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.050 [2024-06-11 13:06:02.789515] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:20:44.050 13:06:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.308 13:06:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.308 "name": "raid_bdev1", 00:20:44.308 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:44.308 "strip_size_kb": 0, 00:20:44.308 "state": "online", 00:20:44.308 "raid_level": "raid1", 00:20:44.308 "superblock": false, 00:20:44.308 "num_base_bdevs": 2, 00:20:44.308 "num_base_bdevs_discovered": 1, 00:20:44.308 "num_base_bdevs_operational": 1, 00:20:44.308 "base_bdevs_list": [ 00:20:44.308 { 00:20:44.308 "name": null, 00:20:44.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.308 "is_configured": false, 00:20:44.308 "data_offset": 0, 00:20:44.308 "data_size": 65536 00:20:44.308 }, 00:20:44.308 { 00:20:44.308 "name": "BaseBdev2", 00:20:44.308 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:44.308 "is_configured": true, 00:20:44.308 "data_offset": 0, 00:20:44.308 "data_size": 65536 00:20:44.308 } 00:20:44.308 ] 00:20:44.308 }' 00:20:44.308 13:06:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.308 13:06:03 -- common/autotest_common.sh@10 -- # set +x 00:20:45.245 13:06:03 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:45.245 13:06:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:45.245 13:06:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:45.245 13:06:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:45.245 13:06:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:45.245 13:06:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.245 13:06:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.245 13:06:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:45.245 "name": "raid_bdev1", 00:20:45.245 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:45.245 "strip_size_kb": 0, 00:20:45.245 "state": "online", 00:20:45.245 "raid_level": "raid1", 00:20:45.245 "superblock": false, 00:20:45.245 "num_base_bdevs": 2, 00:20:45.245 "num_base_bdevs_discovered": 1, 00:20:45.245 "num_base_bdevs_operational": 1, 00:20:45.245 "base_bdevs_list": [ 00:20:45.245 { 00:20:45.245 "name": null, 00:20:45.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.245 "is_configured": false, 00:20:45.245 "data_offset": 0, 00:20:45.245 "data_size": 65536 00:20:45.245 }, 00:20:45.245 { 00:20:45.245 "name": "BaseBdev2", 00:20:45.245 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:45.245 "is_configured": true, 00:20:45.245 "data_offset": 0, 00:20:45.245 "data_size": 65536 00:20:45.245 } 00:20:45.245 ] 00:20:45.245 }' 00:20:45.245 13:06:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:45.245 13:06:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:45.245 13:06:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:45.245 13:06:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:45.245 13:06:04 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:45.505 [2024-06-11 13:06:04.279088] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:45.505 [2024-06-11 13:06:04.279424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:45.505 13:06:04 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:45.505 [2024-06-11 
13:06:04.324992] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:45.505 [2024-06-11 13:06:04.327224] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:45.763 [2024-06-11 13:06:04.453718] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:45.763 [2024-06-11 13:06:04.454217] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:46.022 [2024-06-11 13:06:04.676149] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:46.022 [2024-06-11 13:06:04.676563] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:46.280 [2024-06-11 13:06:04.999163] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:46.539 [2024-06-11 13:06:05.120999] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:46.539 13:06:05 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:46.539 13:06:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:46.539 13:06:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:46.539 13:06:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:46.539 13:06:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:46.539 13:06:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.539 13:06:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.539 [2024-06-11 13:06:05.370267] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:46.539 [2024-06-11 13:06:05.370789] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:46.797 13:06:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:46.797 "name": "raid_bdev1", 00:20:46.797 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:46.797 "strip_size_kb": 0, 00:20:46.797 "state": "online", 00:20:46.797 "raid_level": "raid1", 00:20:46.797 "superblock": false, 00:20:46.797 "num_base_bdevs": 2, 00:20:46.797 "num_base_bdevs_discovered": 2, 00:20:46.797 "num_base_bdevs_operational": 2, 00:20:46.797 "process": { 00:20:46.797 "type": "rebuild", 00:20:46.797 "target": "spare", 00:20:46.797 "progress": { 00:20:46.797 "blocks": 14336, 00:20:46.797 "percent": 21 00:20:46.797 } 00:20:46.797 }, 00:20:46.798 "base_bdevs_list": [ 00:20:46.798 { 00:20:46.798 "name": "spare", 00:20:46.798 "uuid": "5507ba60-c2ac-5de4-8492-beede90c8aab", 00:20:46.798 "is_configured": true, 00:20:46.798 "data_offset": 0, 00:20:46.798 "data_size": 65536 00:20:46.798 }, 00:20:46.798 { 00:20:46.798 "name": "BaseBdev2", 00:20:46.798 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:46.798 "is_configured": true, 00:20:46.798 "data_offset": 0, 00:20:46.798 "data_size": 65536 00:20:46.798 } 00:20:46.798 ] 00:20:46.798 }' 00:20:46.798 13:06:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:46.798 13:06:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:46.798 13:06:05 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@657 -- # local timeout=437 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:47.056 "name": "raid_bdev1", 00:20:47.056 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:47.056 "strip_size_kb": 0, 00:20:47.056 "state": "online", 00:20:47.056 "raid_level": "raid1", 00:20:47.056 "superblock": false, 00:20:47.056 "num_base_bdevs": 2, 00:20:47.056 "num_base_bdevs_discovered": 2, 00:20:47.056 "num_base_bdevs_operational": 2, 00:20:47.056 "process": { 00:20:47.056 "type": "rebuild", 00:20:47.056 "target": "spare", 00:20:47.056 "progress": { 00:20:47.056 "blocks": 20480, 00:20:47.056 "percent": 31 00:20:47.056 } 00:20:47.056 }, 00:20:47.056 "base_bdevs_list": [ 00:20:47.056 { 00:20:47.056 "name": "spare", 00:20:47.056 "uuid": "5507ba60-c2ac-5de4-8492-beede90c8aab", 00:20:47.056 "is_configured": true, 00:20:47.056 "data_offset": 0, 00:20:47.056 "data_size": 65536 00:20:47.056 }, 00:20:47.056 { 00:20:47.056 "name": "BaseBdev2", 00:20:47.056 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:47.056 "is_configured": true, 00:20:47.056 "data_offset": 0, 00:20:47.056 "data_size": 65536 00:20:47.056 } 00:20:47.056 ] 00:20:47.056 }' 00:20:47.056 13:06:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:47.314 13:06:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.314 13:06:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:47.314 13:06:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.314 13:06:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:47.314 [2024-06-11 13:06:06.113836] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:47.573 [2024-06-11 13:06:06.348853] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:47.573 [2024-06-11 13:06:06.355502] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:48.139 [2024-06-11 13:06:06.700154] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:48.139 13:06:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:48.139 13:06:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:20:48.139 13:06:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:48.139 13:06:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:48.139 13:06:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:48.139 13:06:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:48.139 13:06:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.139 13:06:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.398 [2024-06-11 13:06:07.153020] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:48.398 13:06:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:48.398 "name": "raid_bdev1", 00:20:48.398 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:48.398 "strip_size_kb": 0, 00:20:48.398 "state": "online", 00:20:48.398 "raid_level": "raid1", 00:20:48.398 "superblock": false, 00:20:48.398 "num_base_bdevs": 2, 00:20:48.398 "num_base_bdevs_discovered": 2, 00:20:48.399 "num_base_bdevs_operational": 2, 00:20:48.399 "process": { 00:20:48.399 "type": "rebuild", 00:20:48.399 "target": "spare", 00:20:48.399 "progress": { 00:20:48.399 "blocks": 40960, 00:20:48.399 "percent": 62 00:20:48.399 } 00:20:48.399 }, 00:20:48.399 "base_bdevs_list": [ 00:20:48.399 { 00:20:48.399 "name": "spare", 00:20:48.399 "uuid": "5507ba60-c2ac-5de4-8492-beede90c8aab", 00:20:48.399 "is_configured": true, 00:20:48.399 "data_offset": 0, 00:20:48.399 "data_size": 65536 00:20:48.399 }, 00:20:48.399 { 00:20:48.399 "name": "BaseBdev2", 00:20:48.399 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:48.399 "is_configured": true, 00:20:48.399 "data_offset": 0, 00:20:48.399 "data_size": 65536 00:20:48.399 } 00:20:48.399 ] 00:20:48.399 }' 00:20:48.399 13:06:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:48.657 13:06:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.657 13:06:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:48.657 13:06:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.657 13:06:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:48.657 [2024-06-11 13:06:07.382590] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:48.657 [2024-06-11 13:06:07.383133] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:48.916 [2024-06-11 13:06:07.599713] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:49.484 [2024-06-11 13:06:08.042403] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:49.484 [2024-06-11 13:06:08.266340] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:49.743 13:06:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:49.743 13:06:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.743 13:06:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:49.743 13:06:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:49.743 13:06:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:49.743 13:06:08 -- bdev/bdev_raid.sh@186 -- 
# local raid_bdev_info 00:20:49.743 13:06:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.743 13:06:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.743 [2024-06-11 13:06:08.486179] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:49.743 13:06:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:49.743 "name": "raid_bdev1", 00:20:49.743 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:49.743 "strip_size_kb": 0, 00:20:49.743 "state": "online", 00:20:49.743 "raid_level": "raid1", 00:20:49.743 "superblock": false, 00:20:49.743 "num_base_bdevs": 2, 00:20:49.743 "num_base_bdevs_discovered": 2, 00:20:49.743 "num_base_bdevs_operational": 2, 00:20:49.743 "process": { 00:20:49.743 "type": "rebuild", 00:20:49.743 "target": "spare", 00:20:49.743 "progress": { 00:20:49.743 "blocks": 59392, 00:20:49.743 "percent": 90 00:20:49.743 } 00:20:49.743 }, 00:20:49.743 "base_bdevs_list": [ 00:20:49.743 { 00:20:49.743 "name": "spare", 00:20:49.743 "uuid": "5507ba60-c2ac-5de4-8492-beede90c8aab", 00:20:49.743 "is_configured": true, 00:20:49.743 "data_offset": 0, 00:20:49.743 "data_size": 65536 00:20:49.743 }, 00:20:49.743 { 00:20:49.743 "name": "BaseBdev2", 00:20:49.743 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:49.743 "is_configured": true, 00:20:49.743 "data_offset": 0, 00:20:49.743 "data_size": 65536 00:20:49.743 } 00:20:49.743 ] 00:20:49.743 }' 00:20:49.743 13:06:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:50.001 13:06:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.001 13:06:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:50.001 13:06:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.001 13:06:08 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:50.260 [2024-06-11 13:06:08.914160] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:50.260 [2024-06-11 13:06:09.020042] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:50.260 [2024-06-11 13:06:09.022091] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.197 13:06:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:51.197 13:06:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.197 13:06:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:51.197 13:06:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:51.197 13:06:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:51.197 13:06:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:51.197 13:06:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.197 13:06:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.197 13:06:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:51.197 "name": "raid_bdev1", 00:20:51.197 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:51.198 "strip_size_kb": 0, 00:20:51.198 "state": "online", 00:20:51.198 "raid_level": "raid1", 00:20:51.198 "superblock": false, 00:20:51.198 "num_base_bdevs": 2, 00:20:51.198 "num_base_bdevs_discovered": 2, 00:20:51.198 "num_base_bdevs_operational": 2, 00:20:51.198 "base_bdevs_list": [ 00:20:51.198 { 
00:20:51.198 "name": "spare", 00:20:51.198 "uuid": "5507ba60-c2ac-5de4-8492-beede90c8aab", 00:20:51.198 "is_configured": true, 00:20:51.198 "data_offset": 0, 00:20:51.198 "data_size": 65536 00:20:51.198 }, 00:20:51.198 { 00:20:51.198 "name": "BaseBdev2", 00:20:51.198 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:51.198 "is_configured": true, 00:20:51.198 "data_offset": 0, 00:20:51.198 "data_size": 65536 00:20:51.198 } 00:20:51.198 ] 00:20:51.198 }' 00:20:51.198 13:06:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:51.198 13:06:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:51.198 13:06:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:51.456 13:06:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:51.456 13:06:10 -- bdev/bdev_raid.sh@660 -- # break 00:20:51.456 13:06:10 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.456 13:06:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:51.456 13:06:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:51.456 13:06:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:51.456 13:06:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:51.456 13:06:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.456 13:06:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:51.715 "name": "raid_bdev1", 00:20:51.715 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:51.715 "strip_size_kb": 0, 00:20:51.715 "state": "online", 00:20:51.715 "raid_level": "raid1", 00:20:51.715 "superblock": false, 00:20:51.715 "num_base_bdevs": 2, 00:20:51.715 "num_base_bdevs_discovered": 2, 00:20:51.715 "num_base_bdevs_operational": 2, 00:20:51.715 "base_bdevs_list": [ 00:20:51.715 { 00:20:51.715 "name": "spare", 00:20:51.715 "uuid": "5507ba60-c2ac-5de4-8492-beede90c8aab", 00:20:51.715 "is_configured": true, 00:20:51.715 "data_offset": 0, 00:20:51.715 "data_size": 65536 00:20:51.715 }, 00:20:51.715 { 00:20:51.715 "name": "BaseBdev2", 00:20:51.715 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:51.715 "is_configured": true, 00:20:51.715 "data_offset": 0, 00:20:51.715 "data_size": 65536 00:20:51.715 } 00:20:51.715 ] 00:20:51.715 }' 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.715 
13:06:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.715 13:06:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.974 13:06:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:51.974 "name": "raid_bdev1", 00:20:51.974 "uuid": "3ca5073a-e960-4bf2-86d7-22ceabb7b064", 00:20:51.974 "strip_size_kb": 0, 00:20:51.974 "state": "online", 00:20:51.974 "raid_level": "raid1", 00:20:51.974 "superblock": false, 00:20:51.974 "num_base_bdevs": 2, 00:20:51.974 "num_base_bdevs_discovered": 2, 00:20:51.974 "num_base_bdevs_operational": 2, 00:20:51.974 "base_bdevs_list": [ 00:20:51.974 { 00:20:51.974 "name": "spare", 00:20:51.974 "uuid": "5507ba60-c2ac-5de4-8492-beede90c8aab", 00:20:51.974 "is_configured": true, 00:20:51.974 "data_offset": 0, 00:20:51.974 "data_size": 65536 00:20:51.974 }, 00:20:51.974 { 00:20:51.974 "name": "BaseBdev2", 00:20:51.974 "uuid": "f6771b91-0a08-4a36-806a-0840986a0521", 00:20:51.974 "is_configured": true, 00:20:51.974 "data_offset": 0, 00:20:51.974 "data_size": 65536 00:20:51.974 } 00:20:51.974 ] 00:20:51.974 }' 00:20:51.974 13:06:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:51.974 13:06:10 -- common/autotest_common.sh@10 -- # set +x 00:20:52.541 13:06:11 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:52.801 [2024-06-11 13:06:11.459738] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:52.801 [2024-06-11 13:06:11.459802] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:52.801 00:20:52.801 Latency(us) 00:20:52.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.801 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:52.801 raid_bdev1 : 11.75 115.62 346.86 0.00 0.00 12093.88 303.48 113913.48 00:20:52.801 =================================================================================================================== 00:20:52.801 Total : 115.62 346.86 0.00 0.00 12093.88 303.48 113913.48 00:20:52.801 [2024-06-11 13:06:11.522543] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.801 [2024-06-11 13:06:11.522587] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:52.801 [2024-06-11 13:06:11.522663] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:52.801 [2024-06-11 13:06:11.522676] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:20:52.801 0 00:20:52.801 13:06:11 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.801 13:06:11 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:53.060 13:06:11 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:53.060 13:06:11 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:53.060 13:06:11 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:53.060 13:06:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:53.060 13:06:11 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:53.060 13:06:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:53.060 13:06:11 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:53.060 13:06:11 -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:20:53.060 13:06:11 -- bdev/nbd_common.sh@12 -- # local i 00:20:53.060 13:06:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:53.060 13:06:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.060 13:06:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:53.319 /dev/nbd0 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:53.319 13:06:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:53.319 13:06:12 -- common/autotest_common.sh@857 -- # local i 00:20:53.319 13:06:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:53.319 13:06:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:53.319 13:06:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:53.319 13:06:12 -- common/autotest_common.sh@861 -- # break 00:20:53.319 13:06:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:53.319 13:06:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:53.319 13:06:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:53.319 1+0 records in 00:20:53.319 1+0 records out 00:20:53.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484833 s, 8.4 MB/s 00:20:53.319 13:06:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.319 13:06:12 -- common/autotest_common.sh@874 -- # size=4096 00:20:53.319 13:06:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.319 13:06:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:53.319 13:06:12 -- common/autotest_common.sh@877 -- # return 0 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.319 13:06:12 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:53.319 13:06:12 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:53.319 13:06:12 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@12 -- # local i 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.319 13:06:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:53.577 /dev/nbd1 00:20:53.577 13:06:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:53.577 13:06:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:53.577 13:06:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:53.577 13:06:12 -- common/autotest_common.sh@857 -- # local i 00:20:53.577 13:06:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:53.577 13:06:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:53.577 13:06:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:53.577 13:06:12 -- 
common/autotest_common.sh@861 -- # break 00:20:53.577 13:06:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:53.577 13:06:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:53.577 13:06:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:53.577 1+0 records in 00:20:53.577 1+0 records out 00:20:53.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270557 s, 15.1 MB/s 00:20:53.577 13:06:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.577 13:06:12 -- common/autotest_common.sh@874 -- # size=4096 00:20:53.577 13:06:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.577 13:06:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:53.577 13:06:12 -- common/autotest_common.sh@877 -- # return 0 00:20:53.577 13:06:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:53.577 13:06:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.577 13:06:12 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:53.835 13:06:12 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@51 -- # local i 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:53.835 13:06:12 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:54.094 13:06:12 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:54.094 13:06:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.094 13:06:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:54.094 13:06:12 -- bdev/nbd_common.sh@41 -- # break 00:20:54.094 13:06:12 -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.094 13:06:12 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:54.094 13:06:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:54.094 13:06:12 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:54.094 13:06:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:54.094 13:06:12 -- bdev/nbd_common.sh@51 -- # local i 00:20:54.094 13:06:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.094 13:06:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:54.352 13:06:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:54.352 13:06:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:54.352 13:06:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:54.352 13:06:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.352 13:06:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.352 13:06:12 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:54.352 13:06:12 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:54.352 13:06:13 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:54.352 13:06:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.352 13:06:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:54.352 13:06:13 -- bdev/nbd_common.sh@41 -- # break 00:20:54.352 13:06:13 -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.352 13:06:13 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:54.352 13:06:13 -- bdev/bdev_raid.sh@709 -- # killprocess 126896 00:20:54.352 13:06:13 -- common/autotest_common.sh@926 -- # '[' -z 126896 ']' 00:20:54.352 13:06:13 -- common/autotest_common.sh@930 -- # kill -0 126896 00:20:54.352 13:06:13 -- common/autotest_common.sh@931 -- # uname 00:20:54.352 13:06:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:54.352 13:06:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126896 00:20:54.352 killing process with pid 126896 00:20:54.352 Received shutdown signal, test time was about 13.353309 seconds 00:20:54.352 00:20:54.352 Latency(us) 00:20:54.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.352 =================================================================================================================== 00:20:54.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.352 13:06:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:54.352 13:06:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:54.352 13:06:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126896' 00:20:54.352 13:06:13 -- common/autotest_common.sh@945 -- # kill 126896 00:20:54.352 13:06:13 -- common/autotest_common.sh@950 -- # wait 126896 00:20:54.352 [2024-06-11 13:06:13.106840] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:54.611 [2024-06-11 13:06:13.268846] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:55.607 ************************************ 00:20:55.608 END TEST raid_rebuild_test_io 00:20:55.608 ************************************ 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:55.608 00:20:55.608 real 0m18.248s 00:20:55.608 user 0m27.882s 00:20:55.608 sys 0m1.712s 00:20:55.608 13:06:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:55.608 13:06:14 -- common/autotest_common.sh@10 -- # set +x 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:20:55.608 13:06:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:55.608 13:06:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:55.608 13:06:14 -- common/autotest_common.sh@10 -- # set +x 00:20:55.608 ************************************ 00:20:55.608 START TEST raid_rebuild_test_sb_io 00:20:55.608 ************************************ 00:20:55.608 13:06:14 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@521 -- # 
(( i = 1 )) 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@544 -- # raid_pid=127409 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@545 -- # waitforlisten 127409 /var/tmp/spdk-raid.sock 00:20:55.608 13:06:14 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:55.608 13:06:14 -- common/autotest_common.sh@819 -- # '[' -z 127409 ']' 00:20:55.608 13:06:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:55.608 13:06:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:55.608 13:06:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:55.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:55.608 13:06:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:55.608 13:06:14 -- common/autotest_common.sh@10 -- # set +x 00:20:55.866 [2024-06-11 13:06:14.460539] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:55.866 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:55.866 Zero copy mechanism will not be used. 
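The trace above (bdev_raid.sh lines 543-545) shows how the sb_io variant of the rebuild test drives a dedicated bdevperf instance over its own RPC socket: bdevperf is launched with background I/O enabled (-U), 3 MiB random read/write I/O at queue depth 2, and the bdev_raid debug log flag, and the harness then waits for the UNIX socket before issuing any RPCs. A minimal sketch of that launch sequence follows, assuming the same paths as in the log; the backgrounding and the capture of the PID via $! are inferred, since the trace only shows the already-expanded pid 127409.

  rpc_sock=/var/tmp/spdk-raid.sock
  rootdir=/home/vagrant/spdk_repo/spdk
  # start bdevperf detached, targeting the raid bdev that will be created later over RPC
  "$rootdir/build/examples/bdevperf" -r "$rpc_sock" -T raid_bdev1 -t 60 \
      -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # waitforlisten (common/autotest_common.sh) polls until the socket accepts RPCs
  waitforlisten "$raid_pid" "$rpc_sock"

The 3 MiB I/O size (-o 3M, i.e. 3145728 bytes) is also why the EAL banner above reports that the zero copy mechanism will not be used: it exceeds the 65536-byte zero-copy threshold.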
00:20:55.866 [2024-06-11 13:06:14.460754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127409 ] 00:20:55.867 [2024-06-11 13:06:14.632047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.125 [2024-06-11 13:06:14.834066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.383 [2024-06-11 13:06:15.020786] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.642 13:06:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:56.642 13:06:15 -- common/autotest_common.sh@852 -- # return 0 00:20:56.642 13:06:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:56.642 13:06:15 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:56.642 13:06:15 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:56.901 BaseBdev1_malloc 00:20:56.901 13:06:15 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:57.159 [2024-06-11 13:06:15.836232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:57.159 [2024-06-11 13:06:15.836342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.159 [2024-06-11 13:06:15.836380] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:57.159 [2024-06-11 13:06:15.836432] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.160 [2024-06-11 13:06:15.838770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.160 [2024-06-11 13:06:15.838821] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:57.160 BaseBdev1 00:20:57.160 13:06:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:57.160 13:06:15 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:57.160 13:06:15 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:57.418 BaseBdev2_malloc 00:20:57.418 13:06:16 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:57.677 [2024-06-11 13:06:16.337588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:57.677 [2024-06-11 13:06:16.337672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.677 [2024-06-11 13:06:16.337720] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:57.677 [2024-06-11 13:06:16.337780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.677 [2024-06-11 13:06:16.340088] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.677 [2024-06-11 13:06:16.340146] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:57.677 BaseBdev2 00:20:57.677 13:06:16 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:57.936 spare_malloc 00:20:57.936 13:06:16 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:58.195 spare_delay 00:20:58.195 13:06:16 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:58.195 [2024-06-11 13:06:16.983505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:58.195 [2024-06-11 13:06:16.983576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.195 [2024-06-11 13:06:16.983621] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:58.195 [2024-06-11 13:06:16.983668] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.195 [2024-06-11 13:06:16.985921] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.195 [2024-06-11 13:06:16.985979] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:58.195 spare 00:20:58.195 13:06:16 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:58.454 [2024-06-11 13:06:17.163607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:58.454 [2024-06-11 13:06:17.165531] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:58.454 [2024-06-11 13:06:17.165738] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:20:58.454 [2024-06-11 13:06:17.165754] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:58.454 [2024-06-11 13:06:17.165897] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:58.454 [2024-06-11 13:06:17.166279] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:20:58.454 [2024-06-11 13:06:17.166312] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:20:58.454 [2024-06-11 13:06:17.166443] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.454 13:06:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.712 13:06:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:58.712 "name": "raid_bdev1", 00:20:58.712 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:20:58.712 
"strip_size_kb": 0, 00:20:58.712 "state": "online", 00:20:58.712 "raid_level": "raid1", 00:20:58.712 "superblock": true, 00:20:58.712 "num_base_bdevs": 2, 00:20:58.713 "num_base_bdevs_discovered": 2, 00:20:58.713 "num_base_bdevs_operational": 2, 00:20:58.713 "base_bdevs_list": [ 00:20:58.713 { 00:20:58.713 "name": "BaseBdev1", 00:20:58.713 "uuid": "2e857f7d-ffc4-5c8c-9779-b15af7571b23", 00:20:58.713 "is_configured": true, 00:20:58.713 "data_offset": 2048, 00:20:58.713 "data_size": 63488 00:20:58.713 }, 00:20:58.713 { 00:20:58.713 "name": "BaseBdev2", 00:20:58.713 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:20:58.713 "is_configured": true, 00:20:58.713 "data_offset": 2048, 00:20:58.713 "data_size": 63488 00:20:58.713 } 00:20:58.713 ] 00:20:58.713 }' 00:20:58.713 13:06:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:58.713 13:06:17 -- common/autotest_common.sh@10 -- # set +x 00:20:59.279 13:06:18 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:59.279 13:06:18 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:59.537 [2024-06-11 13:06:18.315931] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:59.537 13:06:18 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:59.537 13:06:18 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.537 13:06:18 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:59.795 13:06:18 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:59.796 13:06:18 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:59.796 13:06:18 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:59.796 13:06:18 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:00.054 [2024-06-11 13:06:18.663180] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:00.054 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:00.054 Zero copy mechanism will not be used. 00:21:00.054 Running I/O for 60 seconds... 
00:21:00.054 [2024-06-11 13:06:18.754314] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:00.054 [2024-06-11 13:06:18.754544] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.054 13:06:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.312 13:06:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:00.312 "name": "raid_bdev1", 00:21:00.312 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:00.312 "strip_size_kb": 0, 00:21:00.312 "state": "online", 00:21:00.312 "raid_level": "raid1", 00:21:00.312 "superblock": true, 00:21:00.312 "num_base_bdevs": 2, 00:21:00.312 "num_base_bdevs_discovered": 1, 00:21:00.312 "num_base_bdevs_operational": 1, 00:21:00.312 "base_bdevs_list": [ 00:21:00.312 { 00:21:00.312 "name": null, 00:21:00.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.312 "is_configured": false, 00:21:00.312 "data_offset": 2048, 00:21:00.312 "data_size": 63488 00:21:00.312 }, 00:21:00.312 { 00:21:00.312 "name": "BaseBdev2", 00:21:00.312 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:00.312 "is_configured": true, 00:21:00.312 "data_offset": 2048, 00:21:00.312 "data_size": 63488 00:21:00.312 } 00:21:00.312 ] 00:21:00.312 }' 00:21:00.312 13:06:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:00.312 13:06:19 -- common/autotest_common.sh@10 -- # set +x 00:21:00.879 13:06:19 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:01.137 [2024-06-11 13:06:19.881817] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:01.137 [2024-06-11 13:06:19.881898] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.137 13:06:19 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:01.137 [2024-06-11 13:06:19.928035] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:01.137 [2024-06-11 13:06:19.930058] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:01.395 [2024-06-11 13:06:20.038088] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:01.395 [2024-06-11 13:06:20.038539] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:01.395 [2024-06-11 13:06:20.159283] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:21:01.395 [2024-06-11 13:06:20.159564] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:01.654 [2024-06-11 13:06:20.387685] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:01.654 [2024-06-11 13:06:20.388025] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:01.912 [2024-06-11 13:06:20.592132] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:02.170 [2024-06-11 13:06:20.808591] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:02.170 [2024-06-11 13:06:20.808922] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:02.170 13:06:20 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.170 13:06:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:02.170 13:06:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:02.170 13:06:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:02.170 13:06:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:02.170 13:06:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.170 13:06:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.428 [2024-06-11 13:06:21.018338] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:02.428 13:06:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:02.428 "name": "raid_bdev1", 00:21:02.428 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:02.428 "strip_size_kb": 0, 00:21:02.428 "state": "online", 00:21:02.428 "raid_level": "raid1", 00:21:02.428 "superblock": true, 00:21:02.428 "num_base_bdevs": 2, 00:21:02.428 "num_base_bdevs_discovered": 2, 00:21:02.428 "num_base_bdevs_operational": 2, 00:21:02.428 "process": { 00:21:02.428 "type": "rebuild", 00:21:02.428 "target": "spare", 00:21:02.428 "progress": { 00:21:02.428 "blocks": 18432, 00:21:02.428 "percent": 29 00:21:02.428 } 00:21:02.428 }, 00:21:02.428 "base_bdevs_list": [ 00:21:02.428 { 00:21:02.428 "name": "spare", 00:21:02.428 "uuid": "6708ba17-13cc-5d4d-a8d6-62fb29f40a5e", 00:21:02.428 "is_configured": true, 00:21:02.428 "data_offset": 2048, 00:21:02.428 "data_size": 63488 00:21:02.428 }, 00:21:02.428 { 00:21:02.428 "name": "BaseBdev2", 00:21:02.428 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:02.428 "is_configured": true, 00:21:02.428 "data_offset": 2048, 00:21:02.428 "data_size": 63488 00:21:02.428 } 00:21:02.428 ] 00:21:02.428 }' 00:21:02.428 13:06:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:02.428 13:06:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.428 13:06:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:02.686 13:06:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.686 13:06:21 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:02.686 [2024-06-11 13:06:21.506108] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:02.945 
[2024-06-11 13:06:21.577981] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:02.945 [2024-06-11 13:06:21.578412] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:02.945 [2024-06-11 13:06:21.579127] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:02.945 [2024-06-11 13:06:21.592428] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.945 [2024-06-11 13:06:21.625666] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.945 13:06:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.203 13:06:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:03.203 "name": "raid_bdev1", 00:21:03.203 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:03.203 "strip_size_kb": 0, 00:21:03.203 "state": "online", 00:21:03.203 "raid_level": "raid1", 00:21:03.203 "superblock": true, 00:21:03.203 "num_base_bdevs": 2, 00:21:03.203 "num_base_bdevs_discovered": 1, 00:21:03.203 "num_base_bdevs_operational": 1, 00:21:03.203 "base_bdevs_list": [ 00:21:03.203 { 00:21:03.203 "name": null, 00:21:03.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.203 "is_configured": false, 00:21:03.203 "data_offset": 2048, 00:21:03.203 "data_size": 63488 00:21:03.203 }, 00:21:03.203 { 00:21:03.203 "name": "BaseBdev2", 00:21:03.203 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:03.203 "is_configured": true, 00:21:03.203 "data_offset": 2048, 00:21:03.203 "data_size": 63488 00:21:03.203 } 00:21:03.203 ] 00:21:03.203 }' 00:21:03.203 13:06:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:03.203 13:06:21 -- common/autotest_common.sh@10 -- # set +x 00:21:03.773 13:06:22 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.773 13:06:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:03.773 13:06:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:03.773 13:06:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:03.773 13:06:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:03.773 13:06:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.773 13:06:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.031 13:06:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:04.031 
"name": "raid_bdev1", 00:21:04.031 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:04.031 "strip_size_kb": 0, 00:21:04.031 "state": "online", 00:21:04.031 "raid_level": "raid1", 00:21:04.031 "superblock": true, 00:21:04.031 "num_base_bdevs": 2, 00:21:04.031 "num_base_bdevs_discovered": 1, 00:21:04.031 "num_base_bdevs_operational": 1, 00:21:04.031 "base_bdevs_list": [ 00:21:04.031 { 00:21:04.031 "name": null, 00:21:04.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.031 "is_configured": false, 00:21:04.031 "data_offset": 2048, 00:21:04.031 "data_size": 63488 00:21:04.031 }, 00:21:04.031 { 00:21:04.031 "name": "BaseBdev2", 00:21:04.031 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:04.031 "is_configured": true, 00:21:04.031 "data_offset": 2048, 00:21:04.031 "data_size": 63488 00:21:04.031 } 00:21:04.031 ] 00:21:04.031 }' 00:21:04.031 13:06:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:04.031 13:06:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:04.031 13:06:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:04.289 13:06:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:04.289 13:06:22 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:04.289 [2024-06-11 13:06:23.073805] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:04.289 [2024-06-11 13:06:23.073888] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.289 13:06:23 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:04.289 [2024-06-11 13:06:23.121847] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:04.289 [2024-06-11 13:06:23.123934] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:04.547 [2024-06-11 13:06:23.244074] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:04.547 [2024-06-11 13:06:23.244637] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:04.805 [2024-06-11 13:06:23.466537] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:04.805 [2024-06-11 13:06:23.466768] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:05.063 [2024-06-11 13:06:23.794789] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:05.063 [2024-06-11 13:06:23.795039] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:05.321 [2024-06-11 13:06:24.003658] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:05.321 [2024-06-11 13:06:24.003859] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:05.321 13:06:24 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.321 13:06:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:05.321 13:06:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:05.321 13:06:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:05.321 13:06:24 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.321 13:06:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.321 13:06:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.580 13:06:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:05.580 "name": "raid_bdev1", 00:21:05.580 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:05.580 "strip_size_kb": 0, 00:21:05.580 "state": "online", 00:21:05.580 "raid_level": "raid1", 00:21:05.580 "superblock": true, 00:21:05.580 "num_base_bdevs": 2, 00:21:05.580 "num_base_bdevs_discovered": 2, 00:21:05.580 "num_base_bdevs_operational": 2, 00:21:05.580 "process": { 00:21:05.580 "type": "rebuild", 00:21:05.580 "target": "spare", 00:21:05.580 "progress": { 00:21:05.580 "blocks": 12288, 00:21:05.580 "percent": 19 00:21:05.580 } 00:21:05.580 }, 00:21:05.580 "base_bdevs_list": [ 00:21:05.580 { 00:21:05.580 "name": "spare", 00:21:05.580 "uuid": "6708ba17-13cc-5d4d-a8d6-62fb29f40a5e", 00:21:05.580 "is_configured": true, 00:21:05.580 "data_offset": 2048, 00:21:05.580 "data_size": 63488 00:21:05.580 }, 00:21:05.580 { 00:21:05.580 "name": "BaseBdev2", 00:21:05.580 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:05.580 "is_configured": true, 00:21:05.581 "data_offset": 2048, 00:21:05.581 "data_size": 63488 00:21:05.581 } 00:21:05.581 ] 00:21:05.581 }' 00:21:05.581 13:06:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:05.581 13:06:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:05.581 13:06:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:05.839 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@657 -- # local timeout=456 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.839 [2024-06-11 13:06:24.449548] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:05.839 13:06:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:05.839 "name": "raid_bdev1", 00:21:05.839 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:05.839 "strip_size_kb": 0, 00:21:05.839 "state": "online", 00:21:05.839 "raid_level": "raid1", 00:21:05.839 "superblock": true, 00:21:05.839 "num_base_bdevs": 2, 00:21:05.839 
"num_base_bdevs_discovered": 2, 00:21:05.839 "num_base_bdevs_operational": 2, 00:21:05.839 "process": { 00:21:05.839 "type": "rebuild", 00:21:05.839 "target": "spare", 00:21:05.839 "progress": { 00:21:05.839 "blocks": 16384, 00:21:05.839 "percent": 25 00:21:05.839 } 00:21:05.839 }, 00:21:05.839 "base_bdevs_list": [ 00:21:05.839 { 00:21:05.839 "name": "spare", 00:21:05.839 "uuid": "6708ba17-13cc-5d4d-a8d6-62fb29f40a5e", 00:21:05.839 "is_configured": true, 00:21:05.839 "data_offset": 2048, 00:21:05.839 "data_size": 63488 00:21:05.839 }, 00:21:05.839 { 00:21:05.839 "name": "BaseBdev2", 00:21:05.839 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:05.839 "is_configured": true, 00:21:05.839 "data_offset": 2048, 00:21:05.839 "data_size": 63488 00:21:05.839 } 00:21:05.839 ] 00:21:05.839 }' 00:21:06.098 13:06:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:06.098 13:06:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.098 13:06:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:06.098 13:06:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.098 13:06:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:06.098 [2024-06-11 13:06:24.896584] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:06.357 [2024-06-11 13:06:25.125592] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:06.615 [2024-06-11 13:06:25.245696] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:06.615 [2024-06-11 13:06:25.245924] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:07.182 13:06:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:07.182 13:06:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:07.182 13:06:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:07.182 13:06:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:07.182 13:06:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:07.182 13:06:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:07.182 13:06:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.182 13:06:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.441 13:06:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:07.441 "name": "raid_bdev1", 00:21:07.441 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:07.441 "strip_size_kb": 0, 00:21:07.441 "state": "online", 00:21:07.441 "raid_level": "raid1", 00:21:07.441 "superblock": true, 00:21:07.441 "num_base_bdevs": 2, 00:21:07.441 "num_base_bdevs_discovered": 2, 00:21:07.441 "num_base_bdevs_operational": 2, 00:21:07.441 "process": { 00:21:07.441 "type": "rebuild", 00:21:07.441 "target": "spare", 00:21:07.441 "progress": { 00:21:07.441 "blocks": 38912, 00:21:07.441 "percent": 61 00:21:07.441 } 00:21:07.441 }, 00:21:07.441 "base_bdevs_list": [ 00:21:07.441 { 00:21:07.441 "name": "spare", 00:21:07.441 "uuid": "6708ba17-13cc-5d4d-a8d6-62fb29f40a5e", 00:21:07.441 "is_configured": true, 00:21:07.441 "data_offset": 2048, 00:21:07.441 "data_size": 63488 00:21:07.441 }, 00:21:07.441 { 00:21:07.441 "name": "BaseBdev2", 00:21:07.441 "uuid": 
"8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:07.441 "is_configured": true, 00:21:07.441 "data_offset": 2048, 00:21:07.441 "data_size": 63488 00:21:07.441 } 00:21:07.441 ] 00:21:07.441 }' 00:21:07.441 13:06:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:07.441 13:06:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.441 13:06:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:07.441 13:06:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.441 13:06:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:07.441 [2024-06-11 13:06:26.255910] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:21:07.709 [2024-06-11 13:06:26.357569] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:08.331 13:06:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:08.331 13:06:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.331 13:06:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:08.331 13:06:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:08.331 13:06:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:08.331 13:06:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:08.332 13:06:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.332 13:06:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.590 [2024-06-11 13:06:27.248315] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:08.590 [2024-06-11 13:06:27.354219] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:08.590 [2024-06-11 13:06:27.357184] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.590 13:06:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:08.590 "name": "raid_bdev1", 00:21:08.590 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:08.590 "strip_size_kb": 0, 00:21:08.590 "state": "online", 00:21:08.590 "raid_level": "raid1", 00:21:08.590 "superblock": true, 00:21:08.590 "num_base_bdevs": 2, 00:21:08.590 "num_base_bdevs_discovered": 2, 00:21:08.590 "num_base_bdevs_operational": 2, 00:21:08.590 "base_bdevs_list": [ 00:21:08.590 { 00:21:08.590 "name": "spare", 00:21:08.590 "uuid": "6708ba17-13cc-5d4d-a8d6-62fb29f40a5e", 00:21:08.590 "is_configured": true, 00:21:08.590 "data_offset": 2048, 00:21:08.590 "data_size": 63488 00:21:08.590 }, 00:21:08.590 { 00:21:08.590 "name": "BaseBdev2", 00:21:08.590 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:08.590 "is_configured": true, 00:21:08.590 "data_offset": 2048, 00:21:08.590 "data_size": 63488 00:21:08.590 } 00:21:08.590 ] 00:21:08.590 }' 00:21:08.590 13:06:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:08.850 13:06:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:08.850 13:06:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:08.850 13:06:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:08.850 13:06:27 -- bdev/bdev_raid.sh@660 -- # break 00:21:08.850 13:06:27 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:08.850 13:06:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:08.850 13:06:27 -- 
bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:08.850 13:06:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:08.850 13:06:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:08.850 13:06:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.850 13:06:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:09.108 "name": "raid_bdev1", 00:21:09.108 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:09.108 "strip_size_kb": 0, 00:21:09.108 "state": "online", 00:21:09.108 "raid_level": "raid1", 00:21:09.108 "superblock": true, 00:21:09.108 "num_base_bdevs": 2, 00:21:09.108 "num_base_bdevs_discovered": 2, 00:21:09.108 "num_base_bdevs_operational": 2, 00:21:09.108 "base_bdevs_list": [ 00:21:09.108 { 00:21:09.108 "name": "spare", 00:21:09.108 "uuid": "6708ba17-13cc-5d4d-a8d6-62fb29f40a5e", 00:21:09.108 "is_configured": true, 00:21:09.108 "data_offset": 2048, 00:21:09.108 "data_size": 63488 00:21:09.108 }, 00:21:09.108 { 00:21:09.108 "name": "BaseBdev2", 00:21:09.108 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:09.108 "is_configured": true, 00:21:09.108 "data_offset": 2048, 00:21:09.108 "data_size": 63488 00:21:09.108 } 00:21:09.108 ] 00:21:09.108 }' 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.108 13:06:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.367 13:06:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:09.367 "name": "raid_bdev1", 00:21:09.367 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:09.367 "strip_size_kb": 0, 00:21:09.367 "state": "online", 00:21:09.367 "raid_level": "raid1", 00:21:09.367 "superblock": true, 00:21:09.367 "num_base_bdevs": 2, 00:21:09.367 "num_base_bdevs_discovered": 2, 00:21:09.367 "num_base_bdevs_operational": 2, 00:21:09.367 "base_bdevs_list": [ 00:21:09.367 { 00:21:09.367 "name": "spare", 00:21:09.367 "uuid": "6708ba17-13cc-5d4d-a8d6-62fb29f40a5e", 00:21:09.367 "is_configured": true, 00:21:09.367 "data_offset": 2048, 00:21:09.367 "data_size": 63488 00:21:09.367 }, 00:21:09.367 { 00:21:09.367 "name": "BaseBdev2", 00:21:09.367 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:09.367 
"is_configured": true, 00:21:09.367 "data_offset": 2048, 00:21:09.367 "data_size": 63488 00:21:09.367 } 00:21:09.367 ] 00:21:09.367 }' 00:21:09.367 13:06:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:09.367 13:06:28 -- common/autotest_common.sh@10 -- # set +x 00:21:09.934 13:06:28 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:10.193 [2024-06-11 13:06:28.888713] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:10.193 [2024-06-11 13:06:28.888773] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:10.193 00:21:10.193 Latency(us) 00:21:10.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.193 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:10.193 raid_bdev1 : 10.31 130.68 392.05 0.00 0.00 10173.17 307.20 114866.73 00:21:10.193 =================================================================================================================== 00:21:10.193 Total : 130.68 392.05 0.00 0.00 10173.17 307.20 114866.73 00:21:10.193 [2024-06-11 13:06:28.988796] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.193 [2024-06-11 13:06:28.988843] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:10.193 0 00:21:10.193 [2024-06-11 13:06:28.988951] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:10.193 [2024-06-11 13:06:28.988967] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:10.193 13:06:28 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.193 13:06:28 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:10.452 13:06:29 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:10.452 13:06:29 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:10.452 13:06:29 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:10.452 13:06:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:10.452 13:06:29 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:10.452 13:06:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:10.452 13:06:29 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:10.452 13:06:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:10.452 13:06:29 -- bdev/nbd_common.sh@12 -- # local i 00:21:10.452 13:06:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:10.452 13:06:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:10.452 13:06:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:10.711 /dev/nbd0 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:10.711 13:06:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:10.711 13:06:29 -- common/autotest_common.sh@857 -- # local i 00:21:10.711 13:06:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:10.711 13:06:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:10.711 13:06:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:10.711 13:06:29 -- common/autotest_common.sh@861 -- # break 00:21:10.711 13:06:29 -- 
common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:10.711 13:06:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:10.711 13:06:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:10.711 1+0 records in 00:21:10.711 1+0 records out 00:21:10.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722803 s, 5.7 MB/s 00:21:10.711 13:06:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:10.711 13:06:29 -- common/autotest_common.sh@874 -- # size=4096 00:21:10.711 13:06:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:10.711 13:06:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:10.711 13:06:29 -- common/autotest_common.sh@877 -- # return 0 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:10.711 13:06:29 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:10.711 13:06:29 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:10.711 13:06:29 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@12 -- # local i 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:10.711 13:06:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:10.971 /dev/nbd1 00:21:10.971 13:06:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:10.971 13:06:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:10.971 13:06:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:10.971 13:06:29 -- common/autotest_common.sh@857 -- # local i 00:21:10.971 13:06:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:10.971 13:06:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:10.971 13:06:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:10.971 13:06:29 -- common/autotest_common.sh@861 -- # break 00:21:10.971 13:06:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:10.971 13:06:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:10.971 13:06:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:10.971 1+0 records in 00:21:10.971 1+0 records out 00:21:10.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424454 s, 9.7 MB/s 00:21:10.971 13:06:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:10.971 13:06:29 -- common/autotest_common.sh@874 -- # size=4096 00:21:10.971 13:06:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:10.971 13:06:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:10.971 13:06:29 -- common/autotest_common.sh@877 -- # return 0 00:21:10.971 13:06:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:10.971 13:06:29 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:10.971 13:06:29 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:10.971 13:06:29 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:10.971 13:06:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:10.971 13:06:29 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:10.971 13:06:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:10.971 13:06:29 -- bdev/nbd_common.sh@51 -- # local i 00:21:10.971 13:06:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:10.971 13:06:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@41 -- # break 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@45 -- # return 0 00:21:11.539 13:06:30 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@51 -- # local i 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:11.539 13:06:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@41 -- # break 00:21:11.798 13:06:30 -- bdev/nbd_common.sh@45 -- # return 0 00:21:11.798 13:06:30 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:11.798 13:06:30 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:11.798 13:06:30 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:11.798 13:06:30 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:12.056 13:06:30 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p 
BaseBdev1 00:21:12.315 [2024-06-11 13:06:31.046315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:12.315 [2024-06-11 13:06:31.046414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.315 [2024-06-11 13:06:31.046456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:12.315 [2024-06-11 13:06:31.046499] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.315 [2024-06-11 13:06:31.048946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.315 [2024-06-11 13:06:31.049022] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:12.315 [2024-06-11 13:06:31.049150] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:12.315 [2024-06-11 13:06:31.049240] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:12.315 BaseBdev1 00:21:12.315 13:06:31 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:12.315 13:06:31 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:12.315 13:06:31 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:12.574 13:06:31 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:12.832 [2024-06-11 13:06:31.438403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:12.832 [2024-06-11 13:06:31.438469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.832 [2024-06-11 13:06:31.438503] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:12.832 [2024-06-11 13:06:31.438532] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.832 [2024-06-11 13:06:31.438927] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.832 [2024-06-11 13:06:31.438992] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:12.832 [2024-06-11 13:06:31.439084] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:12.832 [2024-06-11 13:06:31.439100] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:12.832 [2024-06-11 13:06:31.439108] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:12.832 [2024-06-11 13:06:31.439125] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:21:12.832 [2024-06-11 13:06:31.439192] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:12.832 BaseBdev2 00:21:12.832 13:06:31 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:12.832 13:06:31 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:13.091 [2024-06-11 13:06:31.830524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:13.091 [2024-06-11 13:06:31.830581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:21:13.091 [2024-06-11 13:06:31.830620] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:13.091 [2024-06-11 13:06:31.830643] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.091 [2024-06-11 13:06:31.831091] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.091 [2024-06-11 13:06:31.831147] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:13.091 [2024-06-11 13:06:31.831260] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:13.091 [2024-06-11 13:06:31.831293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:13.091 spare 00:21:13.091 13:06:31 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:13.091 13:06:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:13.091 13:06:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:13.091 13:06:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:13.091 13:06:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:13.091 13:06:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:13.091 13:06:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:13.091 13:06:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:13.091 13:06:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:13.091 13:06:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:13.091 13:06:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.092 13:06:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.350 [2024-06-11 13:06:31.931393] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:21:13.350 [2024-06-11 13:06:31.931417] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:13.350 [2024-06-11 13:06:31.931557] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cee0 00:21:13.350 [2024-06-11 13:06:31.931966] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:21:13.350 [2024-06-11 13:06:31.931990] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:21:13.350 [2024-06-11 13:06:31.932124] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.350 13:06:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:13.350 "name": "raid_bdev1", 00:21:13.350 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:13.350 "strip_size_kb": 0, 00:21:13.350 "state": "online", 00:21:13.350 "raid_level": "raid1", 00:21:13.350 "superblock": true, 00:21:13.350 "num_base_bdevs": 2, 00:21:13.350 "num_base_bdevs_discovered": 2, 00:21:13.350 "num_base_bdevs_operational": 2, 00:21:13.350 "base_bdevs_list": [ 00:21:13.350 { 00:21:13.350 "name": "spare", 00:21:13.350 "uuid": "6708ba17-13cc-5d4d-a8d6-62fb29f40a5e", 00:21:13.350 "is_configured": true, 00:21:13.350 "data_offset": 2048, 00:21:13.350 "data_size": 63488 00:21:13.350 }, 00:21:13.350 { 00:21:13.350 "name": "BaseBdev2", 00:21:13.350 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:13.350 "is_configured": true, 00:21:13.350 "data_offset": 2048, 00:21:13.350 "data_size": 63488 00:21:13.350 } 00:21:13.350 ] 00:21:13.350 }' 00:21:13.350 13:06:32 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:13.350 13:06:32 -- common/autotest_common.sh@10 -- # set +x 00:21:13.916 13:06:32 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:13.916 13:06:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.916 13:06:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:13.916 13:06:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:13.916 13:06:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.916 13:06:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.916 13:06:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.173 13:06:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:14.173 "name": "raid_bdev1", 00:21:14.173 "uuid": "030ee27b-162e-4a54-be5a-bb4c2463a121", 00:21:14.173 "strip_size_kb": 0, 00:21:14.173 "state": "online", 00:21:14.173 "raid_level": "raid1", 00:21:14.173 "superblock": true, 00:21:14.173 "num_base_bdevs": 2, 00:21:14.173 "num_base_bdevs_discovered": 2, 00:21:14.173 "num_base_bdevs_operational": 2, 00:21:14.173 "base_bdevs_list": [ 00:21:14.174 { 00:21:14.174 "name": "spare", 00:21:14.174 "uuid": "6708ba17-13cc-5d4d-a8d6-62fb29f40a5e", 00:21:14.174 "is_configured": true, 00:21:14.174 "data_offset": 2048, 00:21:14.174 "data_size": 63488 00:21:14.174 }, 00:21:14.174 { 00:21:14.174 "name": "BaseBdev2", 00:21:14.174 "uuid": "8424ba8a-761d-5406-a110-2aff70970a4d", 00:21:14.174 "is_configured": true, 00:21:14.174 "data_offset": 2048, 00:21:14.174 "data_size": 63488 00:21:14.174 } 00:21:14.174 ] 00:21:14.174 }' 00:21:14.174 13:06:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:14.431 13:06:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:14.431 13:06:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:14.431 13:06:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:14.431 13:06:33 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.431 13:06:33 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:14.688 13:06:33 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.688 13:06:33 -- bdev/bdev_raid.sh@709 -- # killprocess 127409 00:21:14.688 13:06:33 -- common/autotest_common.sh@926 -- # '[' -z 127409 ']' 00:21:14.688 13:06:33 -- common/autotest_common.sh@930 -- # kill -0 127409 00:21:14.688 13:06:33 -- common/autotest_common.sh@931 -- # uname 00:21:14.688 13:06:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:14.688 13:06:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127409 00:21:14.688 killing process with pid 127409 00:21:14.688 Received shutdown signal, test time was about 14.643607 seconds 00:21:14.688 00:21:14.688 Latency(us) 00:21:14.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.688 =================================================================================================================== 00:21:14.688 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.688 13:06:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:14.688 13:06:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:14.688 13:06:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127409' 00:21:14.688 13:06:33 -- common/autotest_common.sh@945 -- # kill 
127409 00:21:14.688 13:06:33 -- common/autotest_common.sh@950 -- # wait 127409 00:21:14.688 [2024-06-11 13:06:33.308962] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:14.688 [2024-06-11 13:06:33.309103] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.688 [2024-06-11 13:06:33.309191] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.688 [2024-06-11 13:06:33.309211] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:21:14.689 [2024-06-11 13:06:33.468444] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:16.061 ************************************ 00:21:16.061 END TEST raid_rebuild_test_sb_io 00:21:16.061 ************************************ 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:16.061 00:21:16.061 real 0m20.163s 00:21:16.061 user 0m32.417s 00:21:16.061 sys 0m2.202s 00:21:16.061 13:06:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:16.061 13:06:34 -- common/autotest_common.sh@10 -- # set +x 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:21:16.061 13:06:34 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:16.061 13:06:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:16.061 13:06:34 -- common/autotest_common.sh@10 -- # set +x 00:21:16.061 ************************************ 00:21:16.061 START TEST raid_rebuild_test 00:21:16.061 ************************************ 00:21:16.061 13:06:34 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:16.061 13:06:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@525 -- # local 
raid_bdev_size 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@544 -- # raid_pid=127995 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@545 -- # waitforlisten 127995 /var/tmp/spdk-raid.sock 00:21:16.062 13:06:34 -- common/autotest_common.sh@819 -- # '[' -z 127995 ']' 00:21:16.062 13:06:34 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:16.062 13:06:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:16.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:16.062 13:06:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:16.062 13:06:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:16.062 13:06:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:16.062 13:06:34 -- common/autotest_common.sh@10 -- # set +x 00:21:16.062 [2024-06-11 13:06:34.684755] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:16.062 [2024-06-11 13:06:34.684938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127995 ] 00:21:16.062 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:16.062 Zero copy mechanism will not be used. 
00:21:16.062 [2024-06-11 13:06:34.847906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.319 [2024-06-11 13:06:35.031350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.577 [2024-06-11 13:06:35.218955] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.836 13:06:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:16.836 13:06:35 -- common/autotest_common.sh@852 -- # return 0 00:21:16.836 13:06:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:16.836 13:06:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:16.836 13:06:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:17.095 BaseBdev1 00:21:17.095 13:06:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:17.095 13:06:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:17.095 13:06:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:17.354 BaseBdev2 00:21:17.354 13:06:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:17.354 13:06:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:17.354 13:06:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:17.613 BaseBdev3 00:21:17.613 13:06:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:17.613 13:06:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:17.613 13:06:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:17.871 BaseBdev4 00:21:17.871 13:06:36 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:18.130 spare_malloc 00:21:18.130 13:06:36 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:18.388 spare_delay 00:21:18.388 13:06:37 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:18.646 [2024-06-11 13:06:37.284937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:18.646 [2024-06-11 13:06:37.285032] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.646 [2024-06-11 13:06:37.285068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:18.646 [2024-06-11 13:06:37.285111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.646 [2024-06-11 13:06:37.287347] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.646 [2024-06-11 13:06:37.287398] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:18.646 spare 00:21:18.646 13:06:37 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:18.646 [2024-06-11 13:06:37.473005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:18.646 [2024-06-11 13:06:37.474893] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:18.646 [2024-06-11 13:06:37.474951] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:18.646 [2024-06-11 13:06:37.474993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:18.646 [2024-06-11 13:06:37.475072] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:21:18.647 [2024-06-11 13:06:37.475087] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:18.647 [2024-06-11 13:06:37.475235] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:18.647 [2024-06-11 13:06:37.475593] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:21:18.647 [2024-06-11 13:06:37.475617] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:21:18.647 [2024-06-11 13:06:37.475772] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.905 13:06:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.905 "name": "raid_bdev1", 00:21:18.905 "uuid": "b6a284d2-0d75-47a4-9dec-d6a5c6b2b499", 00:21:18.905 "strip_size_kb": 0, 00:21:18.905 "state": "online", 00:21:18.905 "raid_level": "raid1", 00:21:18.905 "superblock": false, 00:21:18.905 "num_base_bdevs": 4, 00:21:18.905 "num_base_bdevs_discovered": 4, 00:21:18.905 "num_base_bdevs_operational": 4, 00:21:18.905 "base_bdevs_list": [ 00:21:18.905 { 00:21:18.905 "name": "BaseBdev1", 00:21:18.905 "uuid": "89a14763-af13-4a80-8453-439c822afcbf", 00:21:18.905 "is_configured": true, 00:21:18.905 "data_offset": 0, 00:21:18.905 "data_size": 65536 00:21:18.905 }, 00:21:18.905 { 00:21:18.905 "name": "BaseBdev2", 00:21:18.905 "uuid": "c8b60d84-2e6a-416c-b97e-0f1caeae9b22", 00:21:18.905 "is_configured": true, 00:21:18.905 "data_offset": 0, 00:21:18.905 "data_size": 65536 00:21:18.905 }, 00:21:18.905 { 00:21:18.905 "name": "BaseBdev3", 00:21:18.905 "uuid": "57d3850f-7b20-4d36-b76f-5fb96e76bea9", 00:21:18.905 "is_configured": true, 00:21:18.905 "data_offset": 0, 00:21:18.905 "data_size": 65536 00:21:18.905 }, 00:21:18.905 { 00:21:18.905 "name": "BaseBdev4", 00:21:18.905 "uuid": "4840b434-8d26-491e-b2e1-f92ca7d15367", 00:21:18.905 "is_configured": true, 00:21:18.905 "data_offset": 0, 00:21:18.905 "data_size": 65536 00:21:18.905 } 00:21:18.905 ] 00:21:18.905 }' 00:21:18.905 
13:06:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.905 13:06:37 -- common/autotest_common.sh@10 -- # set +x 00:21:19.883 13:06:38 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:19.883 13:06:38 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:19.883 [2024-06-11 13:06:38.597457] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.883 13:06:38 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:19.883 13:06:38 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.883 13:06:38 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:20.140 13:06:38 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:20.140 13:06:38 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:20.140 13:06:38 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:20.140 13:06:38 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:20.140 13:06:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:20.140 13:06:38 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:20.141 13:06:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:20.141 13:06:38 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:20.141 13:06:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:20.141 13:06:38 -- bdev/nbd_common.sh@12 -- # local i 00:21:20.141 13:06:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:20.141 13:06:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:20.141 13:06:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:20.399 [2024-06-11 13:06:39.053321] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:20.399 /dev/nbd0 00:21:20.399 13:06:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:20.399 13:06:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:20.399 13:06:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:20.399 13:06:39 -- common/autotest_common.sh@857 -- # local i 00:21:20.399 13:06:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:20.399 13:06:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:20.399 13:06:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:20.399 13:06:39 -- common/autotest_common.sh@861 -- # break 00:21:20.399 13:06:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:20.399 13:06:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:20.399 13:06:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:20.399 1+0 records in 00:21:20.399 1+0 records out 00:21:20.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195316 s, 21.0 MB/s 00:21:20.399 13:06:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.399 13:06:39 -- common/autotest_common.sh@874 -- # size=4096 00:21:20.399 13:06:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.399 13:06:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:20.399 13:06:39 -- common/autotest_common.sh@877 -- # return 0 00:21:20.399 13:06:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:20.399 13:06:39 -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:21:20.399 13:06:39 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:20.399 13:06:39 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:20.399 13:06:39 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:26.961 65536+0 records in 00:21:26.961 65536+0 records out 00:21:26.961 33554432 bytes (34 MB, 32 MiB) copied, 5.93196 s, 5.7 MB/s 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@51 -- # local i 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:26.961 [2024-06-11 13:06:45.351132] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@41 -- # break 00:21:26.961 13:06:45 -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:26.961 [2024-06-11 13:06:45.642653] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.961 13:06:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.219 13:06:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:27.219 "name": "raid_bdev1", 00:21:27.219 "uuid": "b6a284d2-0d75-47a4-9dec-d6a5c6b2b499", 00:21:27.219 "strip_size_kb": 0, 00:21:27.219 "state": "online", 00:21:27.219 "raid_level": "raid1", 00:21:27.219 "superblock": 
false, 00:21:27.219 "num_base_bdevs": 4, 00:21:27.219 "num_base_bdevs_discovered": 3, 00:21:27.219 "num_base_bdevs_operational": 3, 00:21:27.219 "base_bdevs_list": [ 00:21:27.219 { 00:21:27.219 "name": null, 00:21:27.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.219 "is_configured": false, 00:21:27.219 "data_offset": 0, 00:21:27.219 "data_size": 65536 00:21:27.219 }, 00:21:27.219 { 00:21:27.219 "name": "BaseBdev2", 00:21:27.219 "uuid": "c8b60d84-2e6a-416c-b97e-0f1caeae9b22", 00:21:27.219 "is_configured": true, 00:21:27.219 "data_offset": 0, 00:21:27.219 "data_size": 65536 00:21:27.219 }, 00:21:27.219 { 00:21:27.219 "name": "BaseBdev3", 00:21:27.219 "uuid": "57d3850f-7b20-4d36-b76f-5fb96e76bea9", 00:21:27.219 "is_configured": true, 00:21:27.219 "data_offset": 0, 00:21:27.219 "data_size": 65536 00:21:27.219 }, 00:21:27.219 { 00:21:27.219 "name": "BaseBdev4", 00:21:27.219 "uuid": "4840b434-8d26-491e-b2e1-f92ca7d15367", 00:21:27.219 "is_configured": true, 00:21:27.219 "data_offset": 0, 00:21:27.219 "data_size": 65536 00:21:27.219 } 00:21:27.219 ] 00:21:27.219 }' 00:21:27.219 13:06:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:27.219 13:06:45 -- common/autotest_common.sh@10 -- # set +x 00:21:27.784 13:06:46 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:28.043 [2024-06-11 13:06:46.735483] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:28.043 [2024-06-11 13:06:46.735537] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:28.043 [2024-06-11 13:06:46.746192] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:21:28.043 [2024-06-11 13:06:46.748223] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:28.043 13:06:46 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:28.978 13:06:47 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.978 13:06:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:28.978 13:06:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:28.978 13:06:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:28.978 13:06:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:28.978 13:06:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.978 13:06:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.236 13:06:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.236 "name": "raid_bdev1", 00:21:29.236 "uuid": "b6a284d2-0d75-47a4-9dec-d6a5c6b2b499", 00:21:29.236 "strip_size_kb": 0, 00:21:29.236 "state": "online", 00:21:29.236 "raid_level": "raid1", 00:21:29.236 "superblock": false, 00:21:29.236 "num_base_bdevs": 4, 00:21:29.236 "num_base_bdevs_discovered": 4, 00:21:29.236 "num_base_bdevs_operational": 4, 00:21:29.236 "process": { 00:21:29.236 "type": "rebuild", 00:21:29.236 "target": "spare", 00:21:29.236 "progress": { 00:21:29.236 "blocks": 24576, 00:21:29.236 "percent": 37 00:21:29.236 } 00:21:29.236 }, 00:21:29.236 "base_bdevs_list": [ 00:21:29.236 { 00:21:29.236 "name": "spare", 00:21:29.236 "uuid": "837b679c-1b2a-56f8-9d37-dabb1e0938d9", 00:21:29.236 "is_configured": true, 00:21:29.236 "data_offset": 0, 00:21:29.236 "data_size": 65536 00:21:29.236 }, 00:21:29.236 { 00:21:29.236 "name": "BaseBdev2", 00:21:29.236 
"uuid": "c8b60d84-2e6a-416c-b97e-0f1caeae9b22", 00:21:29.236 "is_configured": true, 00:21:29.236 "data_offset": 0, 00:21:29.236 "data_size": 65536 00:21:29.236 }, 00:21:29.236 { 00:21:29.236 "name": "BaseBdev3", 00:21:29.236 "uuid": "57d3850f-7b20-4d36-b76f-5fb96e76bea9", 00:21:29.236 "is_configured": true, 00:21:29.236 "data_offset": 0, 00:21:29.236 "data_size": 65536 00:21:29.236 }, 00:21:29.236 { 00:21:29.236 "name": "BaseBdev4", 00:21:29.236 "uuid": "4840b434-8d26-491e-b2e1-f92ca7d15367", 00:21:29.236 "is_configured": true, 00:21:29.236 "data_offset": 0, 00:21:29.236 "data_size": 65536 00:21:29.236 } 00:21:29.236 ] 00:21:29.236 }' 00:21:29.236 13:06:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.236 13:06:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.236 13:06:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.493 13:06:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.493 13:06:48 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:29.751 [2024-06-11 13:06:48.346401] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:29.751 [2024-06-11 13:06:48.358371] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:29.751 [2024-06-11 13:06:48.358484] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.751 13:06:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.009 13:06:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:30.009 "name": "raid_bdev1", 00:21:30.009 "uuid": "b6a284d2-0d75-47a4-9dec-d6a5c6b2b499", 00:21:30.009 "strip_size_kb": 0, 00:21:30.009 "state": "online", 00:21:30.009 "raid_level": "raid1", 00:21:30.009 "superblock": false, 00:21:30.009 "num_base_bdevs": 4, 00:21:30.009 "num_base_bdevs_discovered": 3, 00:21:30.009 "num_base_bdevs_operational": 3, 00:21:30.009 "base_bdevs_list": [ 00:21:30.009 { 00:21:30.009 "name": null, 00:21:30.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.009 "is_configured": false, 00:21:30.009 "data_offset": 0, 00:21:30.009 "data_size": 65536 00:21:30.009 }, 00:21:30.009 { 00:21:30.009 "name": "BaseBdev2", 00:21:30.009 "uuid": "c8b60d84-2e6a-416c-b97e-0f1caeae9b22", 00:21:30.009 "is_configured": true, 00:21:30.009 "data_offset": 0, 00:21:30.009 "data_size": 65536 00:21:30.009 }, 00:21:30.009 { 00:21:30.009 "name": "BaseBdev3", 00:21:30.009 "uuid": "57d3850f-7b20-4d36-b76f-5fb96e76bea9", 
00:21:30.009 "is_configured": true, 00:21:30.009 "data_offset": 0, 00:21:30.009 "data_size": 65536 00:21:30.009 }, 00:21:30.009 { 00:21:30.009 "name": "BaseBdev4", 00:21:30.009 "uuid": "4840b434-8d26-491e-b2e1-f92ca7d15367", 00:21:30.009 "is_configured": true, 00:21:30.009 "data_offset": 0, 00:21:30.009 "data_size": 65536 00:21:30.009 } 00:21:30.009 ] 00:21:30.009 }' 00:21:30.009 13:06:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:30.009 13:06:48 -- common/autotest_common.sh@10 -- # set +x 00:21:30.576 13:06:49 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:30.576 13:06:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:30.576 13:06:49 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:30.576 13:06:49 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:30.576 13:06:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:30.576 13:06:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.576 13:06:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.834 13:06:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.834 "name": "raid_bdev1", 00:21:30.834 "uuid": "b6a284d2-0d75-47a4-9dec-d6a5c6b2b499", 00:21:30.834 "strip_size_kb": 0, 00:21:30.834 "state": "online", 00:21:30.834 "raid_level": "raid1", 00:21:30.834 "superblock": false, 00:21:30.834 "num_base_bdevs": 4, 00:21:30.834 "num_base_bdevs_discovered": 3, 00:21:30.834 "num_base_bdevs_operational": 3, 00:21:30.834 "base_bdevs_list": [ 00:21:30.834 { 00:21:30.834 "name": null, 00:21:30.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.834 "is_configured": false, 00:21:30.834 "data_offset": 0, 00:21:30.834 "data_size": 65536 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "name": "BaseBdev2", 00:21:30.834 "uuid": "c8b60d84-2e6a-416c-b97e-0f1caeae9b22", 00:21:30.834 "is_configured": true, 00:21:30.834 "data_offset": 0, 00:21:30.834 "data_size": 65536 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "name": "BaseBdev3", 00:21:30.834 "uuid": "57d3850f-7b20-4d36-b76f-5fb96e76bea9", 00:21:30.834 "is_configured": true, 00:21:30.834 "data_offset": 0, 00:21:30.834 "data_size": 65536 00:21:30.834 }, 00:21:30.834 { 00:21:30.834 "name": "BaseBdev4", 00:21:30.834 "uuid": "4840b434-8d26-491e-b2e1-f92ca7d15367", 00:21:30.834 "is_configured": true, 00:21:30.834 "data_offset": 0, 00:21:30.834 "data_size": 65536 00:21:30.834 } 00:21:30.834 ] 00:21:30.834 }' 00:21:30.834 13:06:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:30.834 13:06:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:30.834 13:06:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:30.835 13:06:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:30.835 13:06:49 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:31.093 [2024-06-11 13:06:49.845723] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:31.093 [2024-06-11 13:06:49.845775] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:31.093 [2024-06-11 13:06:49.855854] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b840 00:21:31.093 [2024-06-11 13:06:49.857895] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:31.093 13:06:49 -- 
bdev/bdev_raid.sh@614 -- # sleep 1 00:21:32.469 13:06:50 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.469 13:06:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:32.469 13:06:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:32.469 13:06:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:32.469 13:06:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:32.469 13:06:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.469 13:06:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.469 13:06:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:32.469 "name": "raid_bdev1", 00:21:32.469 "uuid": "b6a284d2-0d75-47a4-9dec-d6a5c6b2b499", 00:21:32.469 "strip_size_kb": 0, 00:21:32.469 "state": "online", 00:21:32.469 "raid_level": "raid1", 00:21:32.469 "superblock": false, 00:21:32.469 "num_base_bdevs": 4, 00:21:32.469 "num_base_bdevs_discovered": 4, 00:21:32.469 "num_base_bdevs_operational": 4, 00:21:32.469 "process": { 00:21:32.469 "type": "rebuild", 00:21:32.469 "target": "spare", 00:21:32.469 "progress": { 00:21:32.469 "blocks": 24576, 00:21:32.469 "percent": 37 00:21:32.469 } 00:21:32.469 }, 00:21:32.469 "base_bdevs_list": [ 00:21:32.469 { 00:21:32.469 "name": "spare", 00:21:32.469 "uuid": "837b679c-1b2a-56f8-9d37-dabb1e0938d9", 00:21:32.469 "is_configured": true, 00:21:32.469 "data_offset": 0, 00:21:32.469 "data_size": 65536 00:21:32.469 }, 00:21:32.469 { 00:21:32.469 "name": "BaseBdev2", 00:21:32.469 "uuid": "c8b60d84-2e6a-416c-b97e-0f1caeae9b22", 00:21:32.469 "is_configured": true, 00:21:32.469 "data_offset": 0, 00:21:32.469 "data_size": 65536 00:21:32.469 }, 00:21:32.469 { 00:21:32.469 "name": "BaseBdev3", 00:21:32.469 "uuid": "57d3850f-7b20-4d36-b76f-5fb96e76bea9", 00:21:32.469 "is_configured": true, 00:21:32.469 "data_offset": 0, 00:21:32.469 "data_size": 65536 00:21:32.469 }, 00:21:32.469 { 00:21:32.469 "name": "BaseBdev4", 00:21:32.469 "uuid": "4840b434-8d26-491e-b2e1-f92ca7d15367", 00:21:32.469 "is_configured": true, 00:21:32.469 "data_offset": 0, 00:21:32.469 "data_size": 65536 00:21:32.469 } 00:21:32.469 ] 00:21:32.469 }' 00:21:32.469 13:06:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:32.469 13:06:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.469 13:06:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:32.469 13:06:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.469 13:06:51 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:32.469 13:06:51 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:32.469 13:06:51 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:32.469 13:06:51 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:32.469 13:06:51 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:32.727 [2024-06-11 13:06:51.480707] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:32.985 [2024-06-11 13:06:51.569204] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0b840 00:21:32.985 13:06:51 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:32.985 13:06:51 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:32.985 13:06:51 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:21:32.985 13:06:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:32.985 13:06:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:32.985 13:06:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:32.985 13:06:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:32.985 13:06:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.985 13:06:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:33.243 "name": "raid_bdev1", 00:21:33.243 "uuid": "b6a284d2-0d75-47a4-9dec-d6a5c6b2b499", 00:21:33.243 "strip_size_kb": 0, 00:21:33.243 "state": "online", 00:21:33.243 "raid_level": "raid1", 00:21:33.243 "superblock": false, 00:21:33.243 "num_base_bdevs": 4, 00:21:33.243 "num_base_bdevs_discovered": 3, 00:21:33.243 "num_base_bdevs_operational": 3, 00:21:33.243 "process": { 00:21:33.243 "type": "rebuild", 00:21:33.243 "target": "spare", 00:21:33.243 "progress": { 00:21:33.243 "blocks": 38912, 00:21:33.243 "percent": 59 00:21:33.243 } 00:21:33.243 }, 00:21:33.243 "base_bdevs_list": [ 00:21:33.243 { 00:21:33.243 "name": "spare", 00:21:33.243 "uuid": "837b679c-1b2a-56f8-9d37-dabb1e0938d9", 00:21:33.243 "is_configured": true, 00:21:33.243 "data_offset": 0, 00:21:33.243 "data_size": 65536 00:21:33.243 }, 00:21:33.243 { 00:21:33.243 "name": null, 00:21:33.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.243 "is_configured": false, 00:21:33.243 "data_offset": 0, 00:21:33.243 "data_size": 65536 00:21:33.243 }, 00:21:33.243 { 00:21:33.243 "name": "BaseBdev3", 00:21:33.243 "uuid": "57d3850f-7b20-4d36-b76f-5fb96e76bea9", 00:21:33.243 "is_configured": true, 00:21:33.243 "data_offset": 0, 00:21:33.243 "data_size": 65536 00:21:33.243 }, 00:21:33.243 { 00:21:33.243 "name": "BaseBdev4", 00:21:33.243 "uuid": "4840b434-8d26-491e-b2e1-f92ca7d15367", 00:21:33.243 "is_configured": true, 00:21:33.243 "data_offset": 0, 00:21:33.243 "data_size": 65536 00:21:33.243 } 00:21:33.243 ] 00:21:33.243 }' 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@657 -- # local timeout=483 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.243 13:06:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.504 13:06:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:33.504 "name": "raid_bdev1", 00:21:33.504 "uuid": "b6a284d2-0d75-47a4-9dec-d6a5c6b2b499", 00:21:33.504 "strip_size_kb": 0, 00:21:33.504 "state": "online", 00:21:33.504 "raid_level": 
"raid1", 00:21:33.504 "superblock": false, 00:21:33.504 "num_base_bdevs": 4, 00:21:33.504 "num_base_bdevs_discovered": 3, 00:21:33.504 "num_base_bdevs_operational": 3, 00:21:33.504 "process": { 00:21:33.504 "type": "rebuild", 00:21:33.504 "target": "spare", 00:21:33.504 "progress": { 00:21:33.504 "blocks": 47104, 00:21:33.504 "percent": 71 00:21:33.504 } 00:21:33.504 }, 00:21:33.504 "base_bdevs_list": [ 00:21:33.504 { 00:21:33.504 "name": "spare", 00:21:33.504 "uuid": "837b679c-1b2a-56f8-9d37-dabb1e0938d9", 00:21:33.504 "is_configured": true, 00:21:33.504 "data_offset": 0, 00:21:33.504 "data_size": 65536 00:21:33.504 }, 00:21:33.504 { 00:21:33.504 "name": null, 00:21:33.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.504 "is_configured": false, 00:21:33.504 "data_offset": 0, 00:21:33.504 "data_size": 65536 00:21:33.504 }, 00:21:33.504 { 00:21:33.504 "name": "BaseBdev3", 00:21:33.504 "uuid": "57d3850f-7b20-4d36-b76f-5fb96e76bea9", 00:21:33.504 "is_configured": true, 00:21:33.504 "data_offset": 0, 00:21:33.504 "data_size": 65536 00:21:33.504 }, 00:21:33.504 { 00:21:33.504 "name": "BaseBdev4", 00:21:33.504 "uuid": "4840b434-8d26-491e-b2e1-f92ca7d15367", 00:21:33.504 "is_configured": true, 00:21:33.504 "data_offset": 0, 00:21:33.504 "data_size": 65536 00:21:33.504 } 00:21:33.504 ] 00:21:33.504 }' 00:21:33.504 13:06:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:33.504 13:06:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.504 13:06:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:33.504 13:06:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.504 13:06:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:34.442 [2024-06-11 13:06:53.078693] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:34.442 [2024-06-11 13:06:53.078781] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:34.442 [2024-06-11 13:06:53.078865] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.699 13:06:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:34.699 13:06:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.699 13:06:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:34.699 13:06:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:34.699 13:06:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:34.699 13:06:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:34.699 13:06:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.699 13:06:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:34.957 "name": "raid_bdev1", 00:21:34.957 "uuid": "b6a284d2-0d75-47a4-9dec-d6a5c6b2b499", 00:21:34.957 "strip_size_kb": 0, 00:21:34.957 "state": "online", 00:21:34.957 "raid_level": "raid1", 00:21:34.957 "superblock": false, 00:21:34.957 "num_base_bdevs": 4, 00:21:34.957 "num_base_bdevs_discovered": 3, 00:21:34.957 "num_base_bdevs_operational": 3, 00:21:34.957 "base_bdevs_list": [ 00:21:34.957 { 00:21:34.957 "name": "spare", 00:21:34.957 "uuid": "837b679c-1b2a-56f8-9d37-dabb1e0938d9", 00:21:34.957 "is_configured": true, 00:21:34.957 "data_offset": 0, 00:21:34.957 "data_size": 65536 00:21:34.957 }, 00:21:34.957 { 00:21:34.957 "name": null, 
00:21:34.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.957 "is_configured": false, 00:21:34.957 "data_offset": 0, 00:21:34.957 "data_size": 65536 00:21:34.957 }, 00:21:34.957 { 00:21:34.957 "name": "BaseBdev3", 00:21:34.957 "uuid": "57d3850f-7b20-4d36-b76f-5fb96e76bea9", 00:21:34.957 "is_configured": true, 00:21:34.957 "data_offset": 0, 00:21:34.957 "data_size": 65536 00:21:34.957 }, 00:21:34.957 { 00:21:34.957 "name": "BaseBdev4", 00:21:34.957 "uuid": "4840b434-8d26-491e-b2e1-f92ca7d15367", 00:21:34.957 "is_configured": true, 00:21:34.957 "data_offset": 0, 00:21:34.957 "data_size": 65536 00:21:34.957 } 00:21:34.957 ] 00:21:34.957 }' 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@660 -- # break 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.957 13:06:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.216 13:06:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:35.216 "name": "raid_bdev1", 00:21:35.216 "uuid": "b6a284d2-0d75-47a4-9dec-d6a5c6b2b499", 00:21:35.216 "strip_size_kb": 0, 00:21:35.216 "state": "online", 00:21:35.216 "raid_level": "raid1", 00:21:35.216 "superblock": false, 00:21:35.216 "num_base_bdevs": 4, 00:21:35.216 "num_base_bdevs_discovered": 3, 00:21:35.216 "num_base_bdevs_operational": 3, 00:21:35.216 "base_bdevs_list": [ 00:21:35.216 { 00:21:35.216 "name": "spare", 00:21:35.216 "uuid": "837b679c-1b2a-56f8-9d37-dabb1e0938d9", 00:21:35.216 "is_configured": true, 00:21:35.216 "data_offset": 0, 00:21:35.216 "data_size": 65536 00:21:35.216 }, 00:21:35.216 { 00:21:35.216 "name": null, 00:21:35.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.216 "is_configured": false, 00:21:35.216 "data_offset": 0, 00:21:35.216 "data_size": 65536 00:21:35.216 }, 00:21:35.216 { 00:21:35.216 "name": "BaseBdev3", 00:21:35.216 "uuid": "57d3850f-7b20-4d36-b76f-5fb96e76bea9", 00:21:35.216 "is_configured": true, 00:21:35.216 "data_offset": 0, 00:21:35.216 "data_size": 65536 00:21:35.216 }, 00:21:35.216 { 00:21:35.216 "name": "BaseBdev4", 00:21:35.216 "uuid": "4840b434-8d26-491e-b2e1-f92ca7d15367", 00:21:35.216 "is_configured": true, 00:21:35.216 "data_offset": 0, 00:21:35.216 "data_size": 65536 00:21:35.216 } 00:21:35.216 ] 00:21:35.216 }' 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=raid_bdev1 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.217 13:06:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.475 13:06:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.475 "name": "raid_bdev1", 00:21:35.475 "uuid": "b6a284d2-0d75-47a4-9dec-d6a5c6b2b499", 00:21:35.475 "strip_size_kb": 0, 00:21:35.475 "state": "online", 00:21:35.475 "raid_level": "raid1", 00:21:35.475 "superblock": false, 00:21:35.475 "num_base_bdevs": 4, 00:21:35.475 "num_base_bdevs_discovered": 3, 00:21:35.475 "num_base_bdevs_operational": 3, 00:21:35.475 "base_bdevs_list": [ 00:21:35.475 { 00:21:35.475 "name": "spare", 00:21:35.475 "uuid": "837b679c-1b2a-56f8-9d37-dabb1e0938d9", 00:21:35.475 "is_configured": true, 00:21:35.475 "data_offset": 0, 00:21:35.475 "data_size": 65536 00:21:35.475 }, 00:21:35.475 { 00:21:35.475 "name": null, 00:21:35.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.475 "is_configured": false, 00:21:35.475 "data_offset": 0, 00:21:35.475 "data_size": 65536 00:21:35.475 }, 00:21:35.475 { 00:21:35.475 "name": "BaseBdev3", 00:21:35.475 "uuid": "57d3850f-7b20-4d36-b76f-5fb96e76bea9", 00:21:35.475 "is_configured": true, 00:21:35.475 "data_offset": 0, 00:21:35.475 "data_size": 65536 00:21:35.475 }, 00:21:35.475 { 00:21:35.475 "name": "BaseBdev4", 00:21:35.475 "uuid": "4840b434-8d26-491e-b2e1-f92ca7d15367", 00:21:35.475 "is_configured": true, 00:21:35.475 "data_offset": 0, 00:21:35.475 "data_size": 65536 00:21:35.475 } 00:21:35.475 ] 00:21:35.475 }' 00:21:35.475 13:06:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.475 13:06:54 -- common/autotest_common.sh@10 -- # set +x 00:21:36.411 13:06:54 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:36.411 [2024-06-11 13:06:55.079084] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.411 [2024-06-11 13:06:55.079125] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.411 [2024-06-11 13:06:55.079244] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.411 [2024-06-11 13:06:55.079344] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.411 [2024-06-11 13:06:55.079358] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:36.411 13:06:55 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.411 13:06:55 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:36.670 13:06:55 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:36.670 13:06:55 -- bdev/bdev_raid.sh@673 -- # '[' false = true 
']' 00:21:36.670 13:06:55 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:36.670 13:06:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:36.670 13:06:55 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:36.670 13:06:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:36.670 13:06:55 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:36.670 13:06:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:36.670 13:06:55 -- bdev/nbd_common.sh@12 -- # local i 00:21:36.670 13:06:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:36.670 13:06:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:36.670 13:06:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:36.929 /dev/nbd0 00:21:36.929 13:06:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:36.929 13:06:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:36.929 13:06:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:36.929 13:06:55 -- common/autotest_common.sh@857 -- # local i 00:21:36.929 13:06:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:36.929 13:06:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:36.929 13:06:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:36.929 13:06:55 -- common/autotest_common.sh@861 -- # break 00:21:36.929 13:06:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:36.929 13:06:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:36.929 13:06:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.929 1+0 records in 00:21:36.929 1+0 records out 00:21:36.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423053 s, 9.7 MB/s 00:21:36.929 13:06:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.929 13:06:55 -- common/autotest_common.sh@874 -- # size=4096 00:21:36.929 13:06:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.929 13:06:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:36.929 13:06:55 -- common/autotest_common.sh@877 -- # return 0 00:21:36.929 13:06:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.929 13:06:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:36.929 13:06:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:37.189 /dev/nbd1 00:21:37.189 13:06:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:37.189 13:06:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:37.189 13:06:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:37.189 13:06:55 -- common/autotest_common.sh@857 -- # local i 00:21:37.189 13:06:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:37.189 13:06:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:37.189 13:06:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:37.189 13:06:55 -- common/autotest_common.sh@861 -- # break 00:21:37.189 13:06:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:37.189 13:06:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:37.189 13:06:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.189 1+0 
records in 00:21:37.189 1+0 records out 00:21:37.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499108 s, 8.2 MB/s 00:21:37.189 13:06:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.189 13:06:55 -- common/autotest_common.sh@874 -- # size=4096 00:21:37.189 13:06:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.189 13:06:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:37.189 13:06:55 -- common/autotest_common.sh@877 -- # return 0 00:21:37.189 13:06:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.189 13:06:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:37.189 13:06:55 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:37.448 13:06:56 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:37.448 13:06:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:37.448 13:06:56 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:37.448 13:06:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:37.448 13:06:56 -- bdev/nbd_common.sh@51 -- # local i 00:21:37.448 13:06:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.448 13:06:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@41 -- # break 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.706 13:06:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@41 -- # break 00:21:37.965 13:06:56 -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.965 13:06:56 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:37.965 13:06:56 -- bdev/bdev_raid.sh@709 -- # killprocess 127995 00:21:37.965 13:06:56 -- common/autotest_common.sh@926 -- # '[' -z 127995 ']' 00:21:37.965 13:06:56 -- common/autotest_common.sh@930 -- # 
kill -0 127995 00:21:37.965 13:06:56 -- common/autotest_common.sh@931 -- # uname 00:21:37.965 13:06:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.965 13:06:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127995 00:21:37.965 13:06:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:37.965 13:06:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:37.965 13:06:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127995' 00:21:37.965 killing process with pid 127995 00:21:37.965 Received shutdown signal, test time was about 60.000000 seconds 00:21:37.965 00:21:37.965 Latency(us) 00:21:37.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.966 =================================================================================================================== 00:21:37.966 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:37.966 13:06:56 -- common/autotest_common.sh@945 -- # kill 127995 00:21:37.966 13:06:56 -- common/autotest_common.sh@950 -- # wait 127995 00:21:37.966 [2024-06-11 13:06:56.770612] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:38.533 [2024-06-11 13:06:57.137744] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:39.469 ************************************ 00:21:39.469 END TEST raid_rebuild_test 00:21:39.469 ************************************ 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:39.469 00:21:39.469 real 0m23.580s 00:21:39.469 user 0m32.682s 00:21:39.469 sys 0m4.174s 00:21:39.469 13:06:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.469 13:06:58 -- common/autotest_common.sh@10 -- # set +x 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:21:39.469 13:06:58 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:39.469 13:06:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:39.469 13:06:58 -- common/autotest_common.sh@10 -- # set +x 00:21:39.469 ************************************ 00:21:39.469 START TEST raid_rebuild_test_sb 00:21:39.469 ************************************ 00:21:39.469 13:06:58 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:39.469 
13:06:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:39.469 13:06:58 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:39.470 13:06:58 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:39.470 13:06:58 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:39.470 13:06:58 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:39.470 13:06:58 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:39.470 13:06:58 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:39.470 13:06:58 -- bdev/bdev_raid.sh@544 -- # raid_pid=128617 00:21:39.470 13:06:58 -- bdev/bdev_raid.sh@545 -- # waitforlisten 128617 /var/tmp/spdk-raid.sock 00:21:39.470 13:06:58 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:39.470 13:06:58 -- common/autotest_common.sh@819 -- # '[' -z 128617 ']' 00:21:39.470 13:06:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:39.470 13:06:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:39.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:39.470 13:06:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:39.470 13:06:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:39.470 13:06:58 -- common/autotest_common.sh@10 -- # set +x 00:21:39.470 [2024-06-11 13:06:58.305699] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:39.470 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:39.470 Zero copy mechanism will not be used. 
00:21:39.470 [2024-06-11 13:06:58.305946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128617 ] 00:21:39.728 [2024-06-11 13:06:58.457326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.987 [2024-06-11 13:06:58.666673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.246 [2024-06-11 13:06:58.862457] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:40.505 13:06:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:40.505 13:06:59 -- common/autotest_common.sh@852 -- # return 0 00:21:40.505 13:06:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:40.505 13:06:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:40.505 13:06:59 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:40.764 BaseBdev1_malloc 00:21:40.764 13:06:59 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:41.022 [2024-06-11 13:06:59.621092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:41.022 [2024-06-11 13:06:59.621206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.022 [2024-06-11 13:06:59.621244] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:41.022 [2024-06-11 13:06:59.621305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.022 [2024-06-11 13:06:59.623821] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.022 [2024-06-11 13:06:59.623872] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:41.022 BaseBdev1 00:21:41.022 13:06:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:41.022 13:06:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:41.022 13:06:59 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:41.022 BaseBdev2_malloc 00:21:41.281 13:06:59 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:41.281 [2024-06-11 13:07:00.049790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:41.281 [2024-06-11 13:07:00.049884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.281 [2024-06-11 13:07:00.049935] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:41.281 [2024-06-11 13:07:00.050017] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.281 [2024-06-11 13:07:00.052371] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.281 [2024-06-11 13:07:00.052424] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:41.281 BaseBdev2 00:21:41.281 13:07:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:41.281 13:07:00 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:41.281 13:07:00 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:41.540 BaseBdev3_malloc 00:21:41.540 13:07:00 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:41.798 [2024-06-11 13:07:00.456172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:41.798 [2024-06-11 13:07:00.456253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.798 [2024-06-11 13:07:00.456299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:41.798 [2024-06-11 13:07:00.456353] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.798 [2024-06-11 13:07:00.458581] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.798 [2024-06-11 13:07:00.458638] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:41.799 BaseBdev3 00:21:41.799 13:07:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:41.799 13:07:00 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:41.799 13:07:00 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:42.057 BaseBdev4_malloc 00:21:42.057 13:07:00 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:42.057 [2024-06-11 13:07:00.857644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:42.057 [2024-06-11 13:07:00.857736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.057 [2024-06-11 13:07:00.857778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:42.057 [2024-06-11 13:07:00.857836] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.057 [2024-06-11 13:07:00.860164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.057 [2024-06-11 13:07:00.860222] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:42.057 BaseBdev4 00:21:42.057 13:07:00 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:42.316 spare_malloc 00:21:42.316 13:07:01 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:42.575 spare_delay 00:21:42.575 13:07:01 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:42.834 [2024-06-11 13:07:01.470699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:42.834 [2024-06-11 13:07:01.470785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.834 [2024-06-11 13:07:01.470823] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:42.834 [2024-06-11 13:07:01.470873] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.834 [2024-06-11 13:07:01.473248] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:21:42.834 [2024-06-11 13:07:01.473314] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:42.834 spare 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:42.834 [2024-06-11 13:07:01.654818] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:42.834 [2024-06-11 13:07:01.656744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:42.834 [2024-06-11 13:07:01.656841] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:42.834 [2024-06-11 13:07:01.656902] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:42.834 [2024-06-11 13:07:01.657123] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:21:42.834 [2024-06-11 13:07:01.657146] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:42.834 [2024-06-11 13:07:01.657259] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:42.834 [2024-06-11 13:07:01.657651] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:21:42.834 [2024-06-11 13:07:01.657675] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:21:42.834 [2024-06-11 13:07:01.657801] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.834 13:07:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.093 13:07:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:43.093 "name": "raid_bdev1", 00:21:43.093 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:21:43.093 "strip_size_kb": 0, 00:21:43.093 "state": "online", 00:21:43.093 "raid_level": "raid1", 00:21:43.093 "superblock": true, 00:21:43.093 "num_base_bdevs": 4, 00:21:43.093 "num_base_bdevs_discovered": 4, 00:21:43.093 "num_base_bdevs_operational": 4, 00:21:43.093 "base_bdevs_list": [ 00:21:43.093 { 00:21:43.093 "name": "BaseBdev1", 00:21:43.093 "uuid": "67658edb-b87b-5ce0-b35c-95654d4c5df2", 00:21:43.093 "is_configured": true, 00:21:43.093 "data_offset": 2048, 00:21:43.093 "data_size": 63488 00:21:43.093 }, 00:21:43.093 { 00:21:43.093 "name": "BaseBdev2", 00:21:43.093 "uuid": "7c84d1b9-b140-56dc-9b51-8fdec617faa0", 00:21:43.093 "is_configured": true, 00:21:43.093 "data_offset": 2048, 
00:21:43.093 "data_size": 63488 00:21:43.093 }, 00:21:43.093 { 00:21:43.093 "name": "BaseBdev3", 00:21:43.093 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:21:43.093 "is_configured": true, 00:21:43.093 "data_offset": 2048, 00:21:43.093 "data_size": 63488 00:21:43.093 }, 00:21:43.093 { 00:21:43.093 "name": "BaseBdev4", 00:21:43.093 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:21:43.093 "is_configured": true, 00:21:43.093 "data_offset": 2048, 00:21:43.093 "data_size": 63488 00:21:43.093 } 00:21:43.093 ] 00:21:43.093 }' 00:21:43.093 13:07:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:43.093 13:07:01 -- common/autotest_common.sh@10 -- # set +x 00:21:44.028 13:07:02 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:44.028 13:07:02 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:44.028 [2024-06-11 13:07:02.771186] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.028 13:07:02 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:44.028 13:07:02 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.028 13:07:02 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:44.286 13:07:02 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:44.286 13:07:02 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:44.286 13:07:02 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:44.286 13:07:02 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:44.286 13:07:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:44.286 13:07:02 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:44.286 13:07:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:44.286 13:07:02 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:44.286 13:07:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:44.286 13:07:02 -- bdev/nbd_common.sh@12 -- # local i 00:21:44.286 13:07:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:44.287 13:07:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:44.287 13:07:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:44.545 [2024-06-11 13:07:03.159096] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:44.545 /dev/nbd0 00:21:44.545 13:07:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:44.545 13:07:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:44.545 13:07:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:44.545 13:07:03 -- common/autotest_common.sh@857 -- # local i 00:21:44.545 13:07:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:44.545 13:07:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:44.545 13:07:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:44.545 13:07:03 -- common/autotest_common.sh@861 -- # break 00:21:44.545 13:07:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:44.545 13:07:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:44.545 13:07:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:44.545 1+0 records in 00:21:44.545 1+0 records out 00:21:44.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462831 s, 8.8 MB/s 00:21:44.545 
13:07:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.545 13:07:03 -- common/autotest_common.sh@874 -- # size=4096 00:21:44.545 13:07:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.545 13:07:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:44.545 13:07:03 -- common/autotest_common.sh@877 -- # return 0 00:21:44.545 13:07:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:44.545 13:07:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:44.545 13:07:03 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:44.545 13:07:03 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:44.545 13:07:03 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:51.131 63488+0 records in 00:21:51.131 63488+0 records out 00:21:51.131 32505856 bytes (33 MB, 31 MiB) copied, 6.66725 s, 4.9 MB/s 00:21:51.131 13:07:09 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:51.131 13:07:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:51.131 13:07:09 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:51.131 13:07:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:51.131 13:07:09 -- bdev/nbd_common.sh@51 -- # local i 00:21:51.131 13:07:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:51.131 13:07:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:51.389 13:07:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:51.389 13:07:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:51.389 13:07:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:51.389 13:07:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:51.389 13:07:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:51.389 13:07:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:51.389 13:07:10 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:51.389 [2024-06-11 13:07:10.131789] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.647 13:07:10 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:51.647 13:07:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:51.647 13:07:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:51.647 13:07:10 -- bdev/nbd_common.sh@41 -- # break 00:21:51.647 13:07:10 -- bdev/nbd_common.sh@45 -- # return 0 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:51.647 [2024-06-11 13:07:10.459460] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.647 13:07:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.906 13:07:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.906 "name": "raid_bdev1", 00:21:51.906 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:21:51.906 "strip_size_kb": 0, 00:21:51.906 "state": "online", 00:21:51.906 "raid_level": "raid1", 00:21:51.906 "superblock": true, 00:21:51.906 "num_base_bdevs": 4, 00:21:51.906 "num_base_bdevs_discovered": 3, 00:21:51.906 "num_base_bdevs_operational": 3, 00:21:51.906 "base_bdevs_list": [ 00:21:51.906 { 00:21:51.906 "name": null, 00:21:51.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.906 "is_configured": false, 00:21:51.906 "data_offset": 2048, 00:21:51.906 "data_size": 63488 00:21:51.906 }, 00:21:51.906 { 00:21:51.906 "name": "BaseBdev2", 00:21:51.906 "uuid": "7c84d1b9-b140-56dc-9b51-8fdec617faa0", 00:21:51.906 "is_configured": true, 00:21:51.906 "data_offset": 2048, 00:21:51.906 "data_size": 63488 00:21:51.906 }, 00:21:51.906 { 00:21:51.906 "name": "BaseBdev3", 00:21:51.906 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:21:51.906 "is_configured": true, 00:21:51.906 "data_offset": 2048, 00:21:51.906 "data_size": 63488 00:21:51.906 }, 00:21:51.906 { 00:21:51.906 "name": "BaseBdev4", 00:21:51.906 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:21:51.906 "is_configured": true, 00:21:51.906 "data_offset": 2048, 00:21:51.906 "data_size": 63488 00:21:51.906 } 00:21:51.906 ] 00:21:51.906 }' 00:21:51.906 13:07:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.906 13:07:10 -- common/autotest_common.sh@10 -- # set +x 00:21:52.842 13:07:11 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.842 [2024-06-11 13:07:11.675345] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:52.842 [2024-06-11 13:07:11.675414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:53.100 [2024-06-11 13:07:11.686188] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5170 00:21:53.100 [2024-06-11 13:07:11.688166] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:53.100 13:07:11 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:54.033 13:07:12 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.034 13:07:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:54.034 13:07:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:54.034 13:07:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:54.034 13:07:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:54.034 13:07:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.034 13:07:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.291 13:07:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:54.291 "name": "raid_bdev1", 00:21:54.291 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:21:54.291 "strip_size_kb": 0, 00:21:54.291 "state": "online", 00:21:54.291 "raid_level": "raid1", 00:21:54.291 "superblock": true, 00:21:54.291 "num_base_bdevs": 4, 00:21:54.291 "num_base_bdevs_discovered": 4, 
00:21:54.291 "num_base_bdevs_operational": 4, 00:21:54.291 "process": { 00:21:54.291 "type": "rebuild", 00:21:54.291 "target": "spare", 00:21:54.291 "progress": { 00:21:54.291 "blocks": 24576, 00:21:54.291 "percent": 38 00:21:54.291 } 00:21:54.291 }, 00:21:54.291 "base_bdevs_list": [ 00:21:54.291 { 00:21:54.291 "name": "spare", 00:21:54.291 "uuid": "a360138e-3fbc-5d6f-8dc7-ac3d23ccc85b", 00:21:54.291 "is_configured": true, 00:21:54.291 "data_offset": 2048, 00:21:54.292 "data_size": 63488 00:21:54.292 }, 00:21:54.292 { 00:21:54.292 "name": "BaseBdev2", 00:21:54.292 "uuid": "7c84d1b9-b140-56dc-9b51-8fdec617faa0", 00:21:54.292 "is_configured": true, 00:21:54.292 "data_offset": 2048, 00:21:54.292 "data_size": 63488 00:21:54.292 }, 00:21:54.292 { 00:21:54.292 "name": "BaseBdev3", 00:21:54.292 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:21:54.292 "is_configured": true, 00:21:54.292 "data_offset": 2048, 00:21:54.292 "data_size": 63488 00:21:54.292 }, 00:21:54.292 { 00:21:54.292 "name": "BaseBdev4", 00:21:54.292 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:21:54.292 "is_configured": true, 00:21:54.292 "data_offset": 2048, 00:21:54.292 "data_size": 63488 00:21:54.292 } 00:21:54.292 ] 00:21:54.292 }' 00:21:54.292 13:07:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:54.292 13:07:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:54.292 13:07:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:54.292 13:07:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.292 13:07:13 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:54.549 [2024-06-11 13:07:13.274535] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:54.549 [2024-06-11 13:07:13.298491] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:54.549 [2024-06-11 13:07:13.298578] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.549 13:07:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.807 13:07:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:54.807 "name": "raid_bdev1", 00:21:54.807 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:21:54.807 "strip_size_kb": 0, 00:21:54.807 "state": "online", 00:21:54.807 "raid_level": "raid1", 00:21:54.807 "superblock": true, 00:21:54.807 "num_base_bdevs": 4, 00:21:54.807 "num_base_bdevs_discovered": 3, 00:21:54.807 "num_base_bdevs_operational": 3, 
00:21:54.807 "base_bdevs_list": [ 00:21:54.807 { 00:21:54.807 "name": null, 00:21:54.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.807 "is_configured": false, 00:21:54.807 "data_offset": 2048, 00:21:54.807 "data_size": 63488 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "name": "BaseBdev2", 00:21:54.807 "uuid": "7c84d1b9-b140-56dc-9b51-8fdec617faa0", 00:21:54.807 "is_configured": true, 00:21:54.807 "data_offset": 2048, 00:21:54.807 "data_size": 63488 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "name": "BaseBdev3", 00:21:54.807 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:21:54.807 "is_configured": true, 00:21:54.807 "data_offset": 2048, 00:21:54.807 "data_size": 63488 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "name": "BaseBdev4", 00:21:54.807 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:21:54.807 "is_configured": true, 00:21:54.807 "data_offset": 2048, 00:21:54.807 "data_size": 63488 00:21:54.807 } 00:21:54.807 ] 00:21:54.807 }' 00:21:54.807 13:07:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:54.807 13:07:13 -- common/autotest_common.sh@10 -- # set +x 00:21:55.741 13:07:14 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:55.741 13:07:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:55.741 13:07:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:55.741 13:07:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:55.741 13:07:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:55.741 13:07:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.741 13:07:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.741 13:07:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:55.741 "name": "raid_bdev1", 00:21:55.741 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:21:55.741 "strip_size_kb": 0, 00:21:55.741 "state": "online", 00:21:55.741 "raid_level": "raid1", 00:21:55.741 "superblock": true, 00:21:55.741 "num_base_bdevs": 4, 00:21:55.741 "num_base_bdevs_discovered": 3, 00:21:55.741 "num_base_bdevs_operational": 3, 00:21:55.741 "base_bdevs_list": [ 00:21:55.741 { 00:21:55.741 "name": null, 00:21:55.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.741 "is_configured": false, 00:21:55.741 "data_offset": 2048, 00:21:55.741 "data_size": 63488 00:21:55.741 }, 00:21:55.741 { 00:21:55.741 "name": "BaseBdev2", 00:21:55.741 "uuid": "7c84d1b9-b140-56dc-9b51-8fdec617faa0", 00:21:55.741 "is_configured": true, 00:21:55.741 "data_offset": 2048, 00:21:55.741 "data_size": 63488 00:21:55.741 }, 00:21:55.741 { 00:21:55.741 "name": "BaseBdev3", 00:21:55.741 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:21:55.741 "is_configured": true, 00:21:55.741 "data_offset": 2048, 00:21:55.741 "data_size": 63488 00:21:55.741 }, 00:21:55.741 { 00:21:55.741 "name": "BaseBdev4", 00:21:55.741 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:21:55.741 "is_configured": true, 00:21:55.741 "data_offset": 2048, 00:21:55.741 "data_size": 63488 00:21:55.741 } 00:21:55.741 ] 00:21:55.741 }' 00:21:55.741 13:07:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:55.741 13:07:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:55.741 13:07:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:55.999 13:07:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:55.999 13:07:14 -- bdev/bdev_raid.sh@613 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:55.999 [2024-06-11 13:07:14.775806] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:55.999 [2024-06-11 13:07:14.775884] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:55.999 [2024-06-11 13:07:14.786286] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5310 00:21:55.999 [2024-06-11 13:07:14.788152] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:55.999 13:07:14 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:57.375 13:07:15 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.375 13:07:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:57.375 13:07:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:57.375 13:07:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:57.375 13:07:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:57.375 13:07:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.375 13:07:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.375 13:07:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:57.375 "name": "raid_bdev1", 00:21:57.375 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:21:57.375 "strip_size_kb": 0, 00:21:57.375 "state": "online", 00:21:57.375 "raid_level": "raid1", 00:21:57.375 "superblock": true, 00:21:57.375 "num_base_bdevs": 4, 00:21:57.375 "num_base_bdevs_discovered": 4, 00:21:57.375 "num_base_bdevs_operational": 4, 00:21:57.375 "process": { 00:21:57.375 "type": "rebuild", 00:21:57.375 "target": "spare", 00:21:57.375 "progress": { 00:21:57.375 "blocks": 24576, 00:21:57.375 "percent": 38 00:21:57.375 } 00:21:57.375 }, 00:21:57.375 "base_bdevs_list": [ 00:21:57.375 { 00:21:57.375 "name": "spare", 00:21:57.375 "uuid": "a360138e-3fbc-5d6f-8dc7-ac3d23ccc85b", 00:21:57.375 "is_configured": true, 00:21:57.375 "data_offset": 2048, 00:21:57.375 "data_size": 63488 00:21:57.375 }, 00:21:57.375 { 00:21:57.375 "name": "BaseBdev2", 00:21:57.375 "uuid": "7c84d1b9-b140-56dc-9b51-8fdec617faa0", 00:21:57.375 "is_configured": true, 00:21:57.375 "data_offset": 2048, 00:21:57.375 "data_size": 63488 00:21:57.375 }, 00:21:57.375 { 00:21:57.375 "name": "BaseBdev3", 00:21:57.375 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:21:57.375 "is_configured": true, 00:21:57.375 "data_offset": 2048, 00:21:57.375 "data_size": 63488 00:21:57.375 }, 00:21:57.375 { 00:21:57.375 "name": "BaseBdev4", 00:21:57.375 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:21:57.375 "is_configured": true, 00:21:57.375 "data_offset": 2048, 00:21:57.375 "data_size": 63488 00:21:57.375 } 00:21:57.375 ] 00:21:57.375 }' 00:21:57.375 13:07:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:57.375 13:07:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.375 13:07:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:57.375 13:07:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.375 13:07:16 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:57.375 13:07:16 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:57.375 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:57.375 13:07:16 -- 
bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:57.375 13:07:16 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:57.375 13:07:16 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:57.375 13:07:16 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:57.634 [2024-06-11 13:07:16.346440] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:57.634 [2024-06-11 13:07:16.398380] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca5310 00:21:57.892 13:07:16 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:57.892 13:07:16 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:57.892 13:07:16 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.892 13:07:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:57.892 13:07:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:57.892 13:07:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:57.892 13:07:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:57.892 13:07:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.892 13:07:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.892 13:07:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:57.892 "name": "raid_bdev1", 00:21:57.892 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:21:57.892 "strip_size_kb": 0, 00:21:57.892 "state": "online", 00:21:57.892 "raid_level": "raid1", 00:21:57.892 "superblock": true, 00:21:57.892 "num_base_bdevs": 4, 00:21:57.892 "num_base_bdevs_discovered": 3, 00:21:57.892 "num_base_bdevs_operational": 3, 00:21:57.892 "process": { 00:21:57.892 "type": "rebuild", 00:21:57.892 "target": "spare", 00:21:57.892 "progress": { 00:21:57.892 "blocks": 36864, 00:21:57.892 "percent": 58 00:21:57.892 } 00:21:57.892 }, 00:21:57.892 "base_bdevs_list": [ 00:21:57.892 { 00:21:57.892 "name": "spare", 00:21:57.892 "uuid": "a360138e-3fbc-5d6f-8dc7-ac3d23ccc85b", 00:21:57.892 "is_configured": true, 00:21:57.892 "data_offset": 2048, 00:21:57.892 "data_size": 63488 00:21:57.892 }, 00:21:57.892 { 00:21:57.892 "name": null, 00:21:57.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.892 "is_configured": false, 00:21:57.892 "data_offset": 2048, 00:21:57.892 "data_size": 63488 00:21:57.892 }, 00:21:57.892 { 00:21:57.892 "name": "BaseBdev3", 00:21:57.892 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:21:57.892 "is_configured": true, 00:21:57.892 "data_offset": 2048, 00:21:57.892 "data_size": 63488 00:21:57.892 }, 00:21:57.892 { 00:21:57.892 "name": "BaseBdev4", 00:21:57.892 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:21:57.892 "is_configured": true, 00:21:57.892 "data_offset": 2048, 00:21:57.892 "data_size": 63488 00:21:57.892 } 00:21:57.892 ] 00:21:57.892 }' 00:21:57.892 13:07:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@657 -- # local timeout=508 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.149 13:07:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.407 13:07:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:58.407 "name": "raid_bdev1", 00:21:58.407 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:21:58.407 "strip_size_kb": 0, 00:21:58.407 "state": "online", 00:21:58.407 "raid_level": "raid1", 00:21:58.407 "superblock": true, 00:21:58.407 "num_base_bdevs": 4, 00:21:58.407 "num_base_bdevs_discovered": 3, 00:21:58.407 "num_base_bdevs_operational": 3, 00:21:58.407 "process": { 00:21:58.407 "type": "rebuild", 00:21:58.407 "target": "spare", 00:21:58.407 "progress": { 00:21:58.407 "blocks": 47104, 00:21:58.407 "percent": 74 00:21:58.407 } 00:21:58.407 }, 00:21:58.407 "base_bdevs_list": [ 00:21:58.407 { 00:21:58.407 "name": "spare", 00:21:58.407 "uuid": "a360138e-3fbc-5d6f-8dc7-ac3d23ccc85b", 00:21:58.407 "is_configured": true, 00:21:58.407 "data_offset": 2048, 00:21:58.407 "data_size": 63488 00:21:58.407 }, 00:21:58.407 { 00:21:58.407 "name": null, 00:21:58.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.407 "is_configured": false, 00:21:58.407 "data_offset": 2048, 00:21:58.407 "data_size": 63488 00:21:58.407 }, 00:21:58.407 { 00:21:58.407 "name": "BaseBdev3", 00:21:58.407 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:21:58.407 "is_configured": true, 00:21:58.407 "data_offset": 2048, 00:21:58.407 "data_size": 63488 00:21:58.407 }, 00:21:58.407 { 00:21:58.407 "name": "BaseBdev4", 00:21:58.407 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:21:58.407 "is_configured": true, 00:21:58.407 "data_offset": 2048, 00:21:58.407 "data_size": 63488 00:21:58.407 } 00:21:58.407 ] 00:21:58.407 }' 00:21:58.407 13:07:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:58.407 13:07:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:58.407 13:07:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:58.407 13:07:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.407 13:07:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:59.340 [2024-06-11 13:07:17.908056] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:59.340 [2024-06-11 13:07:17.908164] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:59.340 [2024-06-11 13:07:17.908359] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.599 13:07:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:59.599 13:07:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.599 13:07:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:59.599 13:07:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:59.599 13:07:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:59.599 13:07:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:59.599 13:07:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.599 13:07:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:59.858 "name": "raid_bdev1", 00:21:59.858 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:21:59.858 "strip_size_kb": 0, 00:21:59.858 "state": "online", 00:21:59.858 "raid_level": "raid1", 00:21:59.858 "superblock": true, 00:21:59.858 "num_base_bdevs": 4, 00:21:59.858 "num_base_bdevs_discovered": 3, 00:21:59.858 "num_base_bdevs_operational": 3, 00:21:59.858 "base_bdevs_list": [ 00:21:59.858 { 00:21:59.858 "name": "spare", 00:21:59.858 "uuid": "a360138e-3fbc-5d6f-8dc7-ac3d23ccc85b", 00:21:59.858 "is_configured": true, 00:21:59.858 "data_offset": 2048, 00:21:59.858 "data_size": 63488 00:21:59.858 }, 00:21:59.858 { 00:21:59.858 "name": null, 00:21:59.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.858 "is_configured": false, 00:21:59.858 "data_offset": 2048, 00:21:59.858 "data_size": 63488 00:21:59.858 }, 00:21:59.858 { 00:21:59.858 "name": "BaseBdev3", 00:21:59.858 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:21:59.858 "is_configured": true, 00:21:59.858 "data_offset": 2048, 00:21:59.858 "data_size": 63488 00:21:59.858 }, 00:21:59.858 { 00:21:59.858 "name": "BaseBdev4", 00:21:59.858 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:21:59.858 "is_configured": true, 00:21:59.858 "data_offset": 2048, 00:21:59.858 "data_size": 63488 00:21:59.858 } 00:21:59.858 ] 00:21:59.858 }' 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@660 -- # break 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.858 13:07:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.122 13:07:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:00.122 "name": "raid_bdev1", 00:22:00.122 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:22:00.122 "strip_size_kb": 0, 00:22:00.122 "state": "online", 00:22:00.122 "raid_level": "raid1", 00:22:00.122 "superblock": true, 00:22:00.122 "num_base_bdevs": 4, 00:22:00.122 "num_base_bdevs_discovered": 3, 00:22:00.122 "num_base_bdevs_operational": 3, 00:22:00.122 "base_bdevs_list": [ 00:22:00.122 { 00:22:00.122 "name": "spare", 00:22:00.122 "uuid": "a360138e-3fbc-5d6f-8dc7-ac3d23ccc85b", 00:22:00.122 "is_configured": true, 00:22:00.122 "data_offset": 2048, 00:22:00.122 "data_size": 63488 00:22:00.122 }, 00:22:00.122 { 00:22:00.122 "name": null, 00:22:00.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.122 "is_configured": false, 00:22:00.122 "data_offset": 2048, 00:22:00.122 "data_size": 63488 00:22:00.122 }, 00:22:00.122 { 00:22:00.122 "name": "BaseBdev3", 00:22:00.122 "uuid": 
"be6f7b59-9cad-5407-88f1-d07442cf7091", 00:22:00.122 "is_configured": true, 00:22:00.122 "data_offset": 2048, 00:22:00.122 "data_size": 63488 00:22:00.122 }, 00:22:00.122 { 00:22:00.122 "name": "BaseBdev4", 00:22:00.122 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:22:00.122 "is_configured": true, 00:22:00.122 "data_offset": 2048, 00:22:00.122 "data_size": 63488 00:22:00.122 } 00:22:00.122 ] 00:22:00.122 }' 00:22:00.122 13:07:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:00.122 13:07:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:00.122 13:07:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:00.122 13:07:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:00.122 13:07:18 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:00.122 13:07:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:00.122 13:07:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:00.122 13:07:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:00.122 13:07:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:00.122 13:07:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:00.123 13:07:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:00.123 13:07:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:00.123 13:07:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:00.123 13:07:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:00.123 13:07:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.123 13:07:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.386 13:07:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:00.386 "name": "raid_bdev1", 00:22:00.386 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:22:00.386 "strip_size_kb": 0, 00:22:00.386 "state": "online", 00:22:00.386 "raid_level": "raid1", 00:22:00.386 "superblock": true, 00:22:00.386 "num_base_bdevs": 4, 00:22:00.386 "num_base_bdevs_discovered": 3, 00:22:00.386 "num_base_bdevs_operational": 3, 00:22:00.386 "base_bdevs_list": [ 00:22:00.386 { 00:22:00.386 "name": "spare", 00:22:00.386 "uuid": "a360138e-3fbc-5d6f-8dc7-ac3d23ccc85b", 00:22:00.386 "is_configured": true, 00:22:00.386 "data_offset": 2048, 00:22:00.386 "data_size": 63488 00:22:00.386 }, 00:22:00.386 { 00:22:00.386 "name": null, 00:22:00.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.386 "is_configured": false, 00:22:00.386 "data_offset": 2048, 00:22:00.386 "data_size": 63488 00:22:00.386 }, 00:22:00.386 { 00:22:00.386 "name": "BaseBdev3", 00:22:00.386 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:22:00.386 "is_configured": true, 00:22:00.386 "data_offset": 2048, 00:22:00.386 "data_size": 63488 00:22:00.386 }, 00:22:00.386 { 00:22:00.386 "name": "BaseBdev4", 00:22:00.386 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:22:00.386 "is_configured": true, 00:22:00.386 "data_offset": 2048, 00:22:00.386 "data_size": 63488 00:22:00.386 } 00:22:00.386 ] 00:22:00.386 }' 00:22:00.386 13:07:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:00.386 13:07:19 -- common/autotest_common.sh@10 -- # set +x 00:22:01.322 13:07:19 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:01.322 [2024-06-11 13:07:20.059446] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:22:01.322 [2024-06-11 13:07:20.059499] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:01.322 [2024-06-11 13:07:20.059614] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:01.322 [2024-06-11 13:07:20.059738] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:01.322 [2024-06-11 13:07:20.059753] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:22:01.322 13:07:20 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.322 13:07:20 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:01.582 13:07:20 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:01.582 13:07:20 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:01.582 13:07:20 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:01.582 13:07:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:01.582 13:07:20 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:01.582 13:07:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:01.582 13:07:20 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:01.582 13:07:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:01.582 13:07:20 -- bdev/nbd_common.sh@12 -- # local i 00:22:01.582 13:07:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:01.582 13:07:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:01.582 13:07:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:01.841 /dev/nbd0 00:22:01.841 13:07:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:01.841 13:07:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:01.841 13:07:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:01.841 13:07:20 -- common/autotest_common.sh@857 -- # local i 00:22:01.841 13:07:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:01.841 13:07:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:01.841 13:07:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:01.841 13:07:20 -- common/autotest_common.sh@861 -- # break 00:22:01.841 13:07:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:01.841 13:07:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:01.841 13:07:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.841 1+0 records in 00:22:01.841 1+0 records out 00:22:01.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495781 s, 8.3 MB/s 00:22:01.841 13:07:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.841 13:07:20 -- common/autotest_common.sh@874 -- # size=4096 00:22:01.841 13:07:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.841 13:07:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:01.841 13:07:20 -- common/autotest_common.sh@877 -- # return 0 00:22:01.841 13:07:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:01.841 13:07:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:01.841 13:07:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:02.100 /dev/nbd1 
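Note: the nbd_start_disk / waitfornbd sequence traced above reduces to one RPC call plus a small poll loop. A minimal sketch, reconstructed from the commands visible in this run (socket path, script paths and device names are taken from the trace; the 20-iteration retry limit mirrors the helper's loop bound and is otherwise an assumption):

#!/usr/bin/env bash
# Sketch only: expose a bdev over NBD through the test's RPC socket and wait
# until the kernel device is listed in /proc/partitions and readable, as the
# nbd_start_disk/waitfornbd trace above does for BaseBdev1 and spare.
set -euo pipefail

rpc_sock=/var/tmp/spdk-raid.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

start_and_wait_nbd() {
    local bdev=$1 nbd=$2 i
    "$rpc" -s "$rpc_sock" nbd_start_disk "$bdev" "$nbd"
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$(basename "$nbd")" /proc/partitions &&
           dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $nbd" >&2
    return 1
}

start_and_wait_nbd BaseBdev1 /dev/nbd0
start_and_wait_nbd spare /dev/nbd1

Once both exports are up, the test compares them with cmp -i 1048576 /dev/nbd0 /dev/nbd1 (visible a little further down), skipping the first 1 MiB on each device, which matches the 2048-block data_offset reported in the raid info JSON.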
00:22:02.100 13:07:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:02.100 13:07:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:02.100 13:07:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:02.100 13:07:20 -- common/autotest_common.sh@857 -- # local i 00:22:02.100 13:07:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:02.100 13:07:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:02.100 13:07:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:02.100 13:07:20 -- common/autotest_common.sh@861 -- # break 00:22:02.100 13:07:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:02.100 13:07:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:02.100 13:07:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:02.100 1+0 records in 00:22:02.100 1+0 records out 00:22:02.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305419 s, 13.4 MB/s 00:22:02.100 13:07:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:02.100 13:07:20 -- common/autotest_common.sh@874 -- # size=4096 00:22:02.100 13:07:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:02.100 13:07:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:02.100 13:07:20 -- common/autotest_common.sh@877 -- # return 0 00:22:02.100 13:07:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:02.100 13:07:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:02.100 13:07:20 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:02.358 13:07:21 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:02.358 13:07:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:02.358 13:07:21 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:02.358 13:07:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:02.358 13:07:21 -- bdev/nbd_common.sh@51 -- # local i 00:22:02.358 13:07:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:02.358 13:07:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@41 -- # break 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:02.617 13:07:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@41 -- # break 00:22:02.875 13:07:21 -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.875 13:07:21 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:02.875 13:07:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:02.875 13:07:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:02.875 13:07:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:03.134 13:07:21 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:03.393 [2024-06-11 13:07:22.179491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:03.393 [2024-06-11 13:07:22.179589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.393 [2024-06-11 13:07:22.179640] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:03.393 [2024-06-11 13:07:22.179665] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.393 [2024-06-11 13:07:22.182203] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.393 [2024-06-11 13:07:22.182273] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:03.393 [2024-06-11 13:07:22.182388] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:03.393 [2024-06-11 13:07:22.182453] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.393 BaseBdev1 00:22:03.393 13:07:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:03.393 13:07:22 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:03.393 13:07:22 -- bdev/bdev_raid.sh@696 -- # continue 00:22:03.393 13:07:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:03.393 13:07:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:03.393 13:07:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:03.651 13:07:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:03.909 [2024-06-11 13:07:22.607555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:03.909 [2024-06-11 13:07:22.607632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.909 [2024-06-11 13:07:22.607674] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:03.909 [2024-06-11 13:07:22.607698] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.909 [2024-06-11 13:07:22.608132] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.909 [2024-06-11 
13:07:22.608197] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:03.909 [2024-06-11 13:07:22.608292] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:03.909 [2024-06-11 13:07:22.608307] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:03.909 [2024-06-11 13:07:22.608315] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:03.909 [2024-06-11 13:07:22.608341] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:22:03.909 [2024-06-11 13:07:22.608415] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:03.909 BaseBdev3 00:22:03.909 13:07:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:03.909 13:07:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:03.909 13:07:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:04.166 13:07:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:04.166 [2024-06-11 13:07:23.003621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:04.166 [2024-06-11 13:07:23.003709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.166 [2024-06-11 13:07:23.003752] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:04.166 [2024-06-11 13:07:23.003798] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.166 [2024-06-11 13:07:23.004299] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.166 [2024-06-11 13:07:23.004364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:04.166 [2024-06-11 13:07:23.004459] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:04.166 [2024-06-11 13:07:23.004487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:04.424 BaseBdev4 00:22:04.424 13:07:23 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:04.424 13:07:23 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:04.684 [2024-06-11 13:07:23.384027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:04.684 [2024-06-11 13:07:23.384094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.684 [2024-06-11 13:07:23.384137] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:04.684 [2024-06-11 13:07:23.384171] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.684 [2024-06-11 13:07:23.384591] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.684 [2024-06-11 13:07:23.384641] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:04.684 [2024-06-11 13:07:23.384740] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:04.684 
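Note: the bdev_passthru_delete / bdev_passthru_create pairs above are what force bdev_raid to re-examine each surviving member's superblock and re-assemble raid_bdev1. A sketch of that loop, assuming the same naming convention used in this run (a passthru named <name> backed by <name>_malloc, and the spare backed by spare_delay); slot 1 is left empty because its base bdev was removed earlier in the test:

# Sketch only: recreate each passthru base bdev so the raid module re-runs its
# superblock examine path, as in the @694-@699 trace lines above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for bdev in BaseBdev1 "" BaseBdev3 BaseBdev4; do
    [ -z "$bdev" ] && continue            # slot 1 was emptied earlier; nothing to recreate
    $rpc bdev_passthru_delete "$bdev"
    $rpc bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
done
# The spare sits on a delay bdev rather than a plain malloc bdev.
$rpc bdev_passthru_delete spare
$rpc bdev_passthru_create -b spare_delay -p spare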
[2024-06-11 13:07:23.384778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:04.684 spare 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.684 13:07:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.684 [2024-06-11 13:07:23.484887] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:22:04.684 [2024-06-11 13:07:23.484912] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:04.684 [2024-06-11 13:07:23.485030] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5f20 00:22:04.684 [2024-06-11 13:07:23.485487] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:22:04.684 [2024-06-11 13:07:23.485529] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:22:04.684 [2024-06-11 13:07:23.485686] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:04.942 13:07:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:04.942 "name": "raid_bdev1", 00:22:04.942 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:22:04.942 "strip_size_kb": 0, 00:22:04.942 "state": "online", 00:22:04.942 "raid_level": "raid1", 00:22:04.942 "superblock": true, 00:22:04.942 "num_base_bdevs": 4, 00:22:04.942 "num_base_bdevs_discovered": 3, 00:22:04.942 "num_base_bdevs_operational": 3, 00:22:04.942 "base_bdevs_list": [ 00:22:04.942 { 00:22:04.942 "name": "spare", 00:22:04.942 "uuid": "a360138e-3fbc-5d6f-8dc7-ac3d23ccc85b", 00:22:04.942 "is_configured": true, 00:22:04.942 "data_offset": 2048, 00:22:04.942 "data_size": 63488 00:22:04.942 }, 00:22:04.942 { 00:22:04.942 "name": null, 00:22:04.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.942 "is_configured": false, 00:22:04.942 "data_offset": 2048, 00:22:04.942 "data_size": 63488 00:22:04.942 }, 00:22:04.942 { 00:22:04.942 "name": "BaseBdev3", 00:22:04.942 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:22:04.942 "is_configured": true, 00:22:04.942 "data_offset": 2048, 00:22:04.942 "data_size": 63488 00:22:04.942 }, 00:22:04.942 { 00:22:04.942 "name": "BaseBdev4", 00:22:04.942 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:22:04.943 "is_configured": true, 00:22:04.943 "data_offset": 2048, 00:22:04.943 "data_size": 63488 00:22:04.943 } 00:22:04.943 ] 00:22:04.943 }' 00:22:04.943 13:07:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:04.943 13:07:23 -- common/autotest_common.sh@10 -- # set +x 00:22:05.510 13:07:24 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:22:05.510 13:07:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:05.510 13:07:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:05.510 13:07:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:05.510 13:07:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:05.510 13:07:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.510 13:07:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.768 13:07:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:05.768 "name": "raid_bdev1", 00:22:05.768 "uuid": "9340b3a7-bf4f-4862-994c-1522b86d6072", 00:22:05.768 "strip_size_kb": 0, 00:22:05.768 "state": "online", 00:22:05.768 "raid_level": "raid1", 00:22:05.768 "superblock": true, 00:22:05.768 "num_base_bdevs": 4, 00:22:05.768 "num_base_bdevs_discovered": 3, 00:22:05.768 "num_base_bdevs_operational": 3, 00:22:05.768 "base_bdevs_list": [ 00:22:05.768 { 00:22:05.768 "name": "spare", 00:22:05.768 "uuid": "a360138e-3fbc-5d6f-8dc7-ac3d23ccc85b", 00:22:05.768 "is_configured": true, 00:22:05.768 "data_offset": 2048, 00:22:05.768 "data_size": 63488 00:22:05.768 }, 00:22:05.768 { 00:22:05.768 "name": null, 00:22:05.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.768 "is_configured": false, 00:22:05.768 "data_offset": 2048, 00:22:05.768 "data_size": 63488 00:22:05.768 }, 00:22:05.768 { 00:22:05.768 "name": "BaseBdev3", 00:22:05.769 "uuid": "be6f7b59-9cad-5407-88f1-d07442cf7091", 00:22:05.769 "is_configured": true, 00:22:05.769 "data_offset": 2048, 00:22:05.769 "data_size": 63488 00:22:05.769 }, 00:22:05.769 { 00:22:05.769 "name": "BaseBdev4", 00:22:05.769 "uuid": "aee3e8ed-bf19-54a5-97f3-38dea5159a3d", 00:22:05.769 "is_configured": true, 00:22:05.769 "data_offset": 2048, 00:22:05.769 "data_size": 63488 00:22:05.769 } 00:22:05.769 ] 00:22:05.769 }' 00:22:05.769 13:07:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:06.027 13:07:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:06.027 13:07:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:06.027 13:07:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:06.027 13:07:24 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.027 13:07:24 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:06.286 13:07:24 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:06.286 13:07:24 -- bdev/bdev_raid.sh@709 -- # killprocess 128617 00:22:06.286 13:07:24 -- common/autotest_common.sh@926 -- # '[' -z 128617 ']' 00:22:06.286 13:07:24 -- common/autotest_common.sh@930 -- # kill -0 128617 00:22:06.286 13:07:24 -- common/autotest_common.sh@931 -- # uname 00:22:06.287 13:07:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:06.287 13:07:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128617 00:22:06.287 killing process with pid 128617 00:22:06.287 13:07:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:06.287 13:07:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:06.287 13:07:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128617' 00:22:06.287 13:07:24 -- common/autotest_common.sh@945 -- # kill 128617 00:22:06.287 13:07:24 -- common/autotest_common.sh@950 -- # wait 128617 00:22:06.287 Received shutdown signal, test time was about 
60.000000 seconds 00:22:06.287 00:22:06.287 Latency(us) 00:22:06.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.287 =================================================================================================================== 00:22:06.287 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:06.287 [2024-06-11 13:07:24.905495] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:06.287 [2024-06-11 13:07:24.905634] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:06.287 [2024-06-11 13:07:24.905739] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:06.287 [2024-06-11 13:07:24.905792] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:22:06.546 [2024-06-11 13:07:25.250637] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:07.922 ************************************ 00:22:07.922 END TEST raid_rebuild_test_sb 00:22:07.922 ************************************ 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:07.922 00:22:07.922 real 0m28.078s 00:22:07.922 user 0m40.763s 00:22:07.922 sys 0m4.120s 00:22:07.922 13:07:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.922 13:07:26 -- common/autotest_common.sh@10 -- # set +x 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:22:07.922 13:07:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:07.922 13:07:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:07.922 13:07:26 -- common/autotest_common.sh@10 -- # set +x 00:22:07.922 ************************************ 00:22:07.922 START TEST raid_rebuild_test_io 00:22:07.922 ************************************ 00:22:07.922 13:07:26 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:07.922 13:07:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 
00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@544 -- # raid_pid=129312 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129312 /var/tmp/spdk-raid.sock 00:22:07.923 13:07:26 -- common/autotest_common.sh@819 -- # '[' -z 129312 ']' 00:22:07.923 13:07:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:07.923 13:07:26 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:07.923 13:07:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:07.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:07.923 13:07:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:07.923 13:07:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:07.923 13:07:26 -- common/autotest_common.sh@10 -- # set +x 00:22:07.923 [2024-06-11 13:07:26.477125] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:07.923 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:07.923 Zero copy mechanism will not be used. 00:22:07.923 [2024-06-11 13:07:26.477526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129312 ] 00:22:07.923 [2024-06-11 13:07:26.635590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.181 [2024-06-11 13:07:26.832996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.441 [2024-06-11 13:07:27.029532] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:08.699 13:07:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:08.699 13:07:27 -- common/autotest_common.sh@852 -- # return 0 00:22:08.699 13:07:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:08.699 13:07:27 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:08.699 13:07:27 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:08.958 BaseBdev1 00:22:08.958 13:07:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:08.958 13:07:27 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:08.958 13:07:27 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:09.217 BaseBdev2 00:22:09.217 13:07:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:09.217 13:07:28 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:09.217 13:07:28 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:09.476 BaseBdev3 
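Note: the raid_rebuild_test_io variant drives everything through a bdevperf instance rather than a plain SPDK app. A sketch of the launch traced above; the rpc_get_methods poll is a simplified stand-in for the waitforlisten helper, and the workload itself is only kicked off later through bdevperf.py perform_tests, as the trace further down shows:

# Sketch only: start bdevperf on the test's private RPC socket and wait for it
# to answer RPCs, matching the invocation logged above (raid_pid=129312 here).
rpc_sock=/var/tmp/spdk-raid.sock
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$bdevperf" -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Simplified stand-in for waitforlisten: poll until the RPC socket responds.
until "$rpc" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$raid_pid" 2>/dev/null || exit 1   # bail out if bdevperf died during startup
    sleep 0.2
done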
00:22:09.476 13:07:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:09.476 13:07:28 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:09.476 13:07:28 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:09.734 BaseBdev4 00:22:09.993 13:07:28 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:09.993 spare_malloc 00:22:09.993 13:07:28 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:10.251 spare_delay 00:22:10.251 13:07:29 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:10.510 [2024-06-11 13:07:29.224618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:10.510 [2024-06-11 13:07:29.224727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.510 [2024-06-11 13:07:29.224766] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:10.510 [2024-06-11 13:07:29.224823] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.510 [2024-06-11 13:07:29.227305] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.510 [2024-06-11 13:07:29.227358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:10.510 spare 00:22:10.510 13:07:29 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:10.769 [2024-06-11 13:07:29.460712] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:10.769 [2024-06-11 13:07:29.462750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:10.769 [2024-06-11 13:07:29.462810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:10.769 [2024-06-11 13:07:29.462855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:10.769 [2024-06-11 13:07:29.462926] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:22:10.769 [2024-06-11 13:07:29.462940] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:10.769 [2024-06-11 13:07:29.463090] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:10.769 [2024-06-11 13:07:29.463449] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:22:10.769 [2024-06-11 13:07:29.463466] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:22:10.769 [2024-06-11 13:07:29.463623] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=0 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.769 13:07:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.028 13:07:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:11.028 "name": "raid_bdev1", 00:22:11.028 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:11.028 "strip_size_kb": 0, 00:22:11.028 "state": "online", 00:22:11.028 "raid_level": "raid1", 00:22:11.028 "superblock": false, 00:22:11.028 "num_base_bdevs": 4, 00:22:11.028 "num_base_bdevs_discovered": 4, 00:22:11.028 "num_base_bdevs_operational": 4, 00:22:11.028 "base_bdevs_list": [ 00:22:11.028 { 00:22:11.028 "name": "BaseBdev1", 00:22:11.028 "uuid": "cbcef747-a58e-4a2e-a0fe-039782058d4c", 00:22:11.028 "is_configured": true, 00:22:11.028 "data_offset": 0, 00:22:11.028 "data_size": 65536 00:22:11.028 }, 00:22:11.028 { 00:22:11.028 "name": "BaseBdev2", 00:22:11.028 "uuid": "07f96487-0325-47fd-b0da-acda8cdefa2f", 00:22:11.028 "is_configured": true, 00:22:11.028 "data_offset": 0, 00:22:11.028 "data_size": 65536 00:22:11.028 }, 00:22:11.028 { 00:22:11.028 "name": "BaseBdev3", 00:22:11.028 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:11.028 "is_configured": true, 00:22:11.028 "data_offset": 0, 00:22:11.028 "data_size": 65536 00:22:11.028 }, 00:22:11.028 { 00:22:11.028 "name": "BaseBdev4", 00:22:11.028 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:11.028 "is_configured": true, 00:22:11.028 "data_offset": 0, 00:22:11.028 "data_size": 65536 00:22:11.028 } 00:22:11.028 ] 00:22:11.028 }' 00:22:11.028 13:07:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:11.028 13:07:29 -- common/autotest_common.sh@10 -- # set +x 00:22:11.595 13:07:30 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:11.595 13:07:30 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:11.860 [2024-06-11 13:07:30.449086] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:11.860 13:07:30 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:11.860 13:07:30 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.860 13:07:30 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:11.860 13:07:30 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:11.860 13:07:30 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:11.860 13:07:30 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:11.860 13:07:30 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:12.141 [2024-06-11 13:07:30.752560] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:12.141 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:12.141 Zero copy mechanism will not be used. 
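Note: with perform_tests running in the background, the next step hot-removes BaseBdev1 and expects raid_bdev1 to stay online, degraded to three of four members, which the state dump below confirms. A sketch of that check using the same bdev_raid_get_bdevs / jq pattern seen throughout this log (field names taken from the JSON above):

# Sketch only: remove a base bdev while background I/O is running and confirm
# the array remains online with 3 discovered members.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_raid_remove_base_bdev BaseBdev1

info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(jq -r '.state' <<<"$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")

[[ $state == online && $discovered -eq 3 ]] || exit 1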
00:22:12.141 Running I/O for 60 seconds... 00:22:12.141 [2024-06-11 13:07:30.825471] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:12.141 [2024-06-11 13:07:30.825730] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.141 13:07:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.399 13:07:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:12.399 "name": "raid_bdev1", 00:22:12.399 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:12.400 "strip_size_kb": 0, 00:22:12.400 "state": "online", 00:22:12.400 "raid_level": "raid1", 00:22:12.400 "superblock": false, 00:22:12.400 "num_base_bdevs": 4, 00:22:12.400 "num_base_bdevs_discovered": 3, 00:22:12.400 "num_base_bdevs_operational": 3, 00:22:12.400 "base_bdevs_list": [ 00:22:12.400 { 00:22:12.400 "name": null, 00:22:12.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.400 "is_configured": false, 00:22:12.400 "data_offset": 0, 00:22:12.400 "data_size": 65536 00:22:12.400 }, 00:22:12.400 { 00:22:12.400 "name": "BaseBdev2", 00:22:12.400 "uuid": "07f96487-0325-47fd-b0da-acda8cdefa2f", 00:22:12.400 "is_configured": true, 00:22:12.400 "data_offset": 0, 00:22:12.400 "data_size": 65536 00:22:12.400 }, 00:22:12.400 { 00:22:12.400 "name": "BaseBdev3", 00:22:12.400 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:12.400 "is_configured": true, 00:22:12.400 "data_offset": 0, 00:22:12.400 "data_size": 65536 00:22:12.400 }, 00:22:12.400 { 00:22:12.400 "name": "BaseBdev4", 00:22:12.400 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:12.400 "is_configured": true, 00:22:12.400 "data_offset": 0, 00:22:12.400 "data_size": 65536 00:22:12.400 } 00:22:12.400 ] 00:22:12.400 }' 00:22:12.400 13:07:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:12.400 13:07:31 -- common/autotest_common.sh@10 -- # set +x 00:22:12.967 13:07:31 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:13.225 [2024-06-11 13:07:31.985941] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:13.225 [2024-06-11 13:07:31.986044] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:13.225 13:07:32 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:13.225 [2024-06-11 13:07:32.024028] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:13.225 [2024-06-11 13:07:32.026076] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:22:13.482 [2024-06-11 13:07:32.157034] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:13.482 [2024-06-11 13:07:32.289719] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:13.482 [2024-06-11 13:07:32.290570] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:14.415 [2024-06-11 13:07:32.999511] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:14.415 13:07:33 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.415 13:07:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:14.415 13:07:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:14.415 13:07:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:14.415 13:07:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:14.415 13:07:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.415 13:07:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.415 [2024-06-11 13:07:33.106883] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:14.673 13:07:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:14.673 "name": "raid_bdev1", 00:22:14.673 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:14.673 "strip_size_kb": 0, 00:22:14.673 "state": "online", 00:22:14.673 "raid_level": "raid1", 00:22:14.673 "superblock": false, 00:22:14.673 "num_base_bdevs": 4, 00:22:14.673 "num_base_bdevs_discovered": 4, 00:22:14.673 "num_base_bdevs_operational": 4, 00:22:14.673 "process": { 00:22:14.673 "type": "rebuild", 00:22:14.673 "target": "spare", 00:22:14.673 "progress": { 00:22:14.673 "blocks": 18432, 00:22:14.673 "percent": 28 00:22:14.673 } 00:22:14.673 }, 00:22:14.673 "base_bdevs_list": [ 00:22:14.673 { 00:22:14.673 "name": "spare", 00:22:14.673 "uuid": "cf62462f-ddb6-58dd-b029-c193ebb60197", 00:22:14.673 "is_configured": true, 00:22:14.673 "data_offset": 0, 00:22:14.673 "data_size": 65536 00:22:14.673 }, 00:22:14.673 { 00:22:14.673 "name": "BaseBdev2", 00:22:14.673 "uuid": "07f96487-0325-47fd-b0da-acda8cdefa2f", 00:22:14.673 "is_configured": true, 00:22:14.673 "data_offset": 0, 00:22:14.673 "data_size": 65536 00:22:14.673 }, 00:22:14.673 { 00:22:14.673 "name": "BaseBdev3", 00:22:14.673 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:14.673 "is_configured": true, 00:22:14.673 "data_offset": 0, 00:22:14.673 "data_size": 65536 00:22:14.673 }, 00:22:14.673 { 00:22:14.673 "name": "BaseBdev4", 00:22:14.673 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:14.673 "is_configured": true, 00:22:14.673 "data_offset": 0, 00:22:14.673 "data_size": 65536 00:22:14.673 } 00:22:14.673 ] 00:22:14.673 }' 00:22:14.673 13:07:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:14.673 13:07:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:14.673 13:07:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:14.673 13:07:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:14.673 13:07:33 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 
00:22:14.673 [2024-06-11 13:07:33.499313] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:14.932 [2024-06-11 13:07:33.583433] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:14.932 [2024-06-11 13:07:33.620710] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:14.932 [2024-06-11 13:07:33.720814] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:14.932 [2024-06-11 13:07:33.732172] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.932 [2024-06-11 13:07:33.763214] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:15.190 "name": "raid_bdev1", 00:22:15.190 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:15.190 "strip_size_kb": 0, 00:22:15.190 "state": "online", 00:22:15.190 "raid_level": "raid1", 00:22:15.190 "superblock": false, 00:22:15.190 "num_base_bdevs": 4, 00:22:15.190 "num_base_bdevs_discovered": 3, 00:22:15.190 "num_base_bdevs_operational": 3, 00:22:15.190 "base_bdevs_list": [ 00:22:15.190 { 00:22:15.190 "name": null, 00:22:15.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.190 "is_configured": false, 00:22:15.190 "data_offset": 0, 00:22:15.190 "data_size": 65536 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "name": "BaseBdev2", 00:22:15.190 "uuid": "07f96487-0325-47fd-b0da-acda8cdefa2f", 00:22:15.190 "is_configured": true, 00:22:15.190 "data_offset": 0, 00:22:15.190 "data_size": 65536 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "name": "BaseBdev3", 00:22:15.190 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:15.190 "is_configured": true, 00:22:15.190 "data_offset": 0, 00:22:15.190 "data_size": 65536 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "name": "BaseBdev4", 00:22:15.190 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:15.190 "is_configured": true, 00:22:15.190 "data_offset": 0, 00:22:15.190 "data_size": 65536 00:22:15.190 } 00:22:15.190 ] 00:22:15.190 }' 00:22:15.190 13:07:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:15.190 13:07:33 -- common/autotest_common.sh@10 -- # set +x 00:22:16.126 13:07:34 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:16.126 13:07:34 -- bdev/bdev_raid.sh@183 -- # local 
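Note: the recurring verify_raid_bdev_process checks (the @188-@191 lines, including the progress dump just above) boil down to one RPC call and two jq probes. A sketch, assuming the same socket and raid bdev name as this run:

# Sketch only: confirm raid_bdev1 is running a rebuild that targets "spare".
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
process_type=$(jq -r '.process.type // "none"' <<<"$info")
process_target=$(jq -r '.process.target // "none"' <<<"$info")

[[ $process_type == rebuild && $process_target == spare ]] || {
    echo "raid_bdev1 is not rebuilding onto spare" >&2
    exit 1
}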
raid_bdev_name=raid_bdev1 00:22:16.126 13:07:34 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:16.126 13:07:34 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:16.126 13:07:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:16.126 13:07:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.126 13:07:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.126 13:07:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:16.126 "name": "raid_bdev1", 00:22:16.126 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:16.126 "strip_size_kb": 0, 00:22:16.126 "state": "online", 00:22:16.126 "raid_level": "raid1", 00:22:16.126 "superblock": false, 00:22:16.126 "num_base_bdevs": 4, 00:22:16.126 "num_base_bdevs_discovered": 3, 00:22:16.126 "num_base_bdevs_operational": 3, 00:22:16.126 "base_bdevs_list": [ 00:22:16.126 { 00:22:16.126 "name": null, 00:22:16.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.126 "is_configured": false, 00:22:16.126 "data_offset": 0, 00:22:16.126 "data_size": 65536 00:22:16.126 }, 00:22:16.126 { 00:22:16.126 "name": "BaseBdev2", 00:22:16.126 "uuid": "07f96487-0325-47fd-b0da-acda8cdefa2f", 00:22:16.126 "is_configured": true, 00:22:16.126 "data_offset": 0, 00:22:16.126 "data_size": 65536 00:22:16.126 }, 00:22:16.126 { 00:22:16.126 "name": "BaseBdev3", 00:22:16.126 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:16.126 "is_configured": true, 00:22:16.126 "data_offset": 0, 00:22:16.126 "data_size": 65536 00:22:16.126 }, 00:22:16.126 { 00:22:16.126 "name": "BaseBdev4", 00:22:16.126 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:16.126 "is_configured": true, 00:22:16.126 "data_offset": 0, 00:22:16.126 "data_size": 65536 00:22:16.126 } 00:22:16.126 ] 00:22:16.126 }' 00:22:16.126 13:07:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:16.126 13:07:34 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:16.126 13:07:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:16.385 13:07:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:16.385 13:07:34 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:16.643 [2024-06-11 13:07:35.247382] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:16.643 [2024-06-11 13:07:35.247443] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:16.643 13:07:35 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:16.643 [2024-06-11 13:07:35.290957] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:16.643 [2024-06-11 13:07:35.292924] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:16.643 [2024-06-11 13:07:35.407828] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:16.643 [2024-06-11 13:07:35.409171] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:16.900 [2024-06-11 13:07:35.626689] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:16.900 [2024-06-11 13:07:35.627439] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:22:17.465 [2024-06-11 13:07:36.081540] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:17.465 13:07:36 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:17.465 13:07:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:17.465 13:07:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:17.465 13:07:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:17.465 13:07:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:17.465 13:07:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.465 13:07:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.724 [2024-06-11 13:07:36.420877] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:17.724 13:07:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:17.724 "name": "raid_bdev1", 00:22:17.724 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:17.724 "strip_size_kb": 0, 00:22:17.724 "state": "online", 00:22:17.724 "raid_level": "raid1", 00:22:17.724 "superblock": false, 00:22:17.724 "num_base_bdevs": 4, 00:22:17.724 "num_base_bdevs_discovered": 4, 00:22:17.724 "num_base_bdevs_operational": 4, 00:22:17.724 "process": { 00:22:17.724 "type": "rebuild", 00:22:17.724 "target": "spare", 00:22:17.724 "progress": { 00:22:17.724 "blocks": 14336, 00:22:17.724 "percent": 21 00:22:17.724 } 00:22:17.724 }, 00:22:17.724 "base_bdevs_list": [ 00:22:17.724 { 00:22:17.724 "name": "spare", 00:22:17.724 "uuid": "cf62462f-ddb6-58dd-b029-c193ebb60197", 00:22:17.724 "is_configured": true, 00:22:17.724 "data_offset": 0, 00:22:17.724 "data_size": 65536 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "name": "BaseBdev2", 00:22:17.724 "uuid": "07f96487-0325-47fd-b0da-acda8cdefa2f", 00:22:17.724 "is_configured": true, 00:22:17.724 "data_offset": 0, 00:22:17.724 "data_size": 65536 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "name": "BaseBdev3", 00:22:17.724 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:17.724 "is_configured": true, 00:22:17.724 "data_offset": 0, 00:22:17.724 "data_size": 65536 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "name": "BaseBdev4", 00:22:17.724 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:17.724 "is_configured": true, 00:22:17.724 "data_offset": 0, 00:22:17.724 "data_size": 65536 00:22:17.724 } 00:22:17.724 ] 00:22:17.724 }' 00:22:17.724 13:07:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:17.982 13:07:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:17.982 13:07:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:17.982 [2024-06-11 13:07:36.623864] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:17.982 [2024-06-11 13:07:36.624251] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:17.982 13:07:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:17.982 13:07:36 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:17.982 13:07:36 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:17.982 13:07:36 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:17.982 13:07:36 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:17.982 
13:07:36 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:18.240 [2024-06-11 13:07:36.855064] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:18.240 [2024-06-11 13:07:36.955218] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:18.240 [2024-06-11 13:07:36.980406] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005930 00:22:18.240 [2024-06-11 13:07:36.980455] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ba0 00:22:18.240 [2024-06-11 13:07:36.982207] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:18.240 13:07:36 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:18.240 13:07:36 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:18.240 13:07:36 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:18.240 13:07:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:18.240 13:07:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:18.240 13:07:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:18.240 13:07:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:18.240 13:07:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.240 13:07:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.498 13:07:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:18.498 "name": "raid_bdev1", 00:22:18.498 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:18.498 "strip_size_kb": 0, 00:22:18.498 "state": "online", 00:22:18.498 "raid_level": "raid1", 00:22:18.498 "superblock": false, 00:22:18.498 "num_base_bdevs": 4, 00:22:18.498 "num_base_bdevs_discovered": 3, 00:22:18.498 "num_base_bdevs_operational": 3, 00:22:18.498 "process": { 00:22:18.498 "type": "rebuild", 00:22:18.498 "target": "spare", 00:22:18.498 "progress": { 00:22:18.498 "blocks": 24576, 00:22:18.498 "percent": 37 00:22:18.498 } 00:22:18.498 }, 00:22:18.498 "base_bdevs_list": [ 00:22:18.498 { 00:22:18.498 "name": "spare", 00:22:18.498 "uuid": "cf62462f-ddb6-58dd-b029-c193ebb60197", 00:22:18.498 "is_configured": true, 00:22:18.498 "data_offset": 0, 00:22:18.498 "data_size": 65536 00:22:18.498 }, 00:22:18.498 { 00:22:18.498 "name": null, 00:22:18.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.498 "is_configured": false, 00:22:18.498 "data_offset": 0, 00:22:18.498 "data_size": 65536 00:22:18.498 }, 00:22:18.498 { 00:22:18.498 "name": "BaseBdev3", 00:22:18.498 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:18.498 "is_configured": true, 00:22:18.498 "data_offset": 0, 00:22:18.498 "data_size": 65536 00:22:18.498 }, 00:22:18.498 { 00:22:18.498 "name": "BaseBdev4", 00:22:18.498 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:18.498 "is_configured": true, 00:22:18.498 "data_offset": 0, 00:22:18.498 "data_size": 65536 00:22:18.498 } 00:22:18.498 ] 00:22:18.498 }' 00:22:18.498 13:07:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:18.498 13:07:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:18.498 13:07:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@657 -- # local timeout=529 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.756 [2024-06-11 13:07:37.409880] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:18.756 "name": "raid_bdev1", 00:22:18.756 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:18.756 "strip_size_kb": 0, 00:22:18.756 "state": "online", 00:22:18.756 "raid_level": "raid1", 00:22:18.756 "superblock": false, 00:22:18.756 "num_base_bdevs": 4, 00:22:18.756 "num_base_bdevs_discovered": 3, 00:22:18.756 "num_base_bdevs_operational": 3, 00:22:18.756 "process": { 00:22:18.756 "type": "rebuild", 00:22:18.756 "target": "spare", 00:22:18.756 "progress": { 00:22:18.756 "blocks": 30720, 00:22:18.756 "percent": 46 00:22:18.756 } 00:22:18.756 }, 00:22:18.756 "base_bdevs_list": [ 00:22:18.756 { 00:22:18.756 "name": "spare", 00:22:18.756 "uuid": "cf62462f-ddb6-58dd-b029-c193ebb60197", 00:22:18.756 "is_configured": true, 00:22:18.756 "data_offset": 0, 00:22:18.756 "data_size": 65536 00:22:18.756 }, 00:22:18.756 { 00:22:18.756 "name": null, 00:22:18.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.756 "is_configured": false, 00:22:18.756 "data_offset": 0, 00:22:18.756 "data_size": 65536 00:22:18.756 }, 00:22:18.756 { 00:22:18.756 "name": "BaseBdev3", 00:22:18.756 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:18.756 "is_configured": true, 00:22:18.756 "data_offset": 0, 00:22:18.756 "data_size": 65536 00:22:18.756 }, 00:22:18.756 { 00:22:18.756 "name": "BaseBdev4", 00:22:18.756 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:18.756 "is_configured": true, 00:22:18.756 "data_offset": 0, 00:22:18.756 "data_size": 65536 00:22:18.756 } 00:22:18.756 ] 00:22:18.756 }' 00:22:18.756 13:07:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:19.014 13:07:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:19.014 13:07:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:19.014 13:07:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.014 13:07:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:19.272 [2024-06-11 13:07:37.961603] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:19.530 [2024-06-11 13:07:38.172200] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:19.788 [2024-06-11 13:07:38.491406] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:20.046 13:07:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:20.046 13:07:38 -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.046 13:07:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:20.046 13:07:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:20.046 13:07:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:20.046 13:07:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:20.046 13:07:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.046 13:07:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.046 [2024-06-11 13:07:38.718705] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:20.303 13:07:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:20.303 "name": "raid_bdev1", 00:22:20.303 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:20.303 "strip_size_kb": 0, 00:22:20.303 "state": "online", 00:22:20.303 "raid_level": "raid1", 00:22:20.303 "superblock": false, 00:22:20.303 "num_base_bdevs": 4, 00:22:20.303 "num_base_bdevs_discovered": 3, 00:22:20.303 "num_base_bdevs_operational": 3, 00:22:20.303 "process": { 00:22:20.303 "type": "rebuild", 00:22:20.303 "target": "spare", 00:22:20.303 "progress": { 00:22:20.303 "blocks": 49152, 00:22:20.303 "percent": 75 00:22:20.303 } 00:22:20.303 }, 00:22:20.303 "base_bdevs_list": [ 00:22:20.303 { 00:22:20.304 "name": "spare", 00:22:20.304 "uuid": "cf62462f-ddb6-58dd-b029-c193ebb60197", 00:22:20.304 "is_configured": true, 00:22:20.304 "data_offset": 0, 00:22:20.304 "data_size": 65536 00:22:20.304 }, 00:22:20.304 { 00:22:20.304 "name": null, 00:22:20.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.304 "is_configured": false, 00:22:20.304 "data_offset": 0, 00:22:20.304 "data_size": 65536 00:22:20.304 }, 00:22:20.304 { 00:22:20.304 "name": "BaseBdev3", 00:22:20.304 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:20.304 "is_configured": true, 00:22:20.304 "data_offset": 0, 00:22:20.304 "data_size": 65536 00:22:20.304 }, 00:22:20.304 { 00:22:20.304 "name": "BaseBdev4", 00:22:20.304 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:20.304 "is_configured": true, 00:22:20.304 "data_offset": 0, 00:22:20.304 "data_size": 65536 00:22:20.304 } 00:22:20.304 ] 00:22:20.304 }' 00:22:20.304 13:07:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:20.304 13:07:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.304 13:07:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:20.304 13:07:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.304 13:07:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:20.561 [2024-06-11 13:07:39.271792] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:22:21.128 [2024-06-11 13:07:39.714477] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:21.128 [2024-06-11 13:07:39.821184] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:21.128 [2024-06-11 13:07:39.823921] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.386 13:07:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:21.387 13:07:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:21.387 13:07:40 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:22:21.387 13:07:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:21.387 13:07:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:21.387 13:07:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:21.387 13:07:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.387 13:07:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:21.645 "name": "raid_bdev1", 00:22:21.645 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:21.645 "strip_size_kb": 0, 00:22:21.645 "state": "online", 00:22:21.645 "raid_level": "raid1", 00:22:21.645 "superblock": false, 00:22:21.645 "num_base_bdevs": 4, 00:22:21.645 "num_base_bdevs_discovered": 3, 00:22:21.645 "num_base_bdevs_operational": 3, 00:22:21.645 "base_bdevs_list": [ 00:22:21.645 { 00:22:21.645 "name": "spare", 00:22:21.645 "uuid": "cf62462f-ddb6-58dd-b029-c193ebb60197", 00:22:21.645 "is_configured": true, 00:22:21.645 "data_offset": 0, 00:22:21.645 "data_size": 65536 00:22:21.645 }, 00:22:21.645 { 00:22:21.645 "name": null, 00:22:21.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.645 "is_configured": false, 00:22:21.645 "data_offset": 0, 00:22:21.645 "data_size": 65536 00:22:21.645 }, 00:22:21.645 { 00:22:21.645 "name": "BaseBdev3", 00:22:21.645 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:21.645 "is_configured": true, 00:22:21.645 "data_offset": 0, 00:22:21.645 "data_size": 65536 00:22:21.645 }, 00:22:21.645 { 00:22:21.645 "name": "BaseBdev4", 00:22:21.645 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:21.645 "is_configured": true, 00:22:21.645 "data_offset": 0, 00:22:21.645 "data_size": 65536 00:22:21.645 } 00:22:21.645 ] 00:22:21.645 }' 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@660 -- # break 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.645 13:07:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:21.904 "name": "raid_bdev1", 00:22:21.904 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:21.904 "strip_size_kb": 0, 00:22:21.904 "state": "online", 00:22:21.904 "raid_level": "raid1", 00:22:21.904 "superblock": false, 00:22:21.904 "num_base_bdevs": 4, 00:22:21.904 "num_base_bdevs_discovered": 3, 00:22:21.904 "num_base_bdevs_operational": 3, 00:22:21.904 "base_bdevs_list": [ 00:22:21.904 { 00:22:21.904 "name": "spare", 00:22:21.904 "uuid": "cf62462f-ddb6-58dd-b029-c193ebb60197", 00:22:21.904 "is_configured": true, 00:22:21.904 "data_offset": 0, 00:22:21.904 
"data_size": 65536 00:22:21.904 }, 00:22:21.904 { 00:22:21.904 "name": null, 00:22:21.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.904 "is_configured": false, 00:22:21.904 "data_offset": 0, 00:22:21.904 "data_size": 65536 00:22:21.904 }, 00:22:21.904 { 00:22:21.904 "name": "BaseBdev3", 00:22:21.904 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:21.904 "is_configured": true, 00:22:21.904 "data_offset": 0, 00:22:21.904 "data_size": 65536 00:22:21.904 }, 00:22:21.904 { 00:22:21.904 "name": "BaseBdev4", 00:22:21.904 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:21.904 "is_configured": true, 00:22:21.904 "data_offset": 0, 00:22:21.904 "data_size": 65536 00:22:21.904 } 00:22:21.904 ] 00:22:21.904 }' 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.904 13:07:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.162 13:07:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.162 "name": "raid_bdev1", 00:22:22.162 "uuid": "d827784e-ed52-4952-8295-7be16cd5cbcf", 00:22:22.162 "strip_size_kb": 0, 00:22:22.162 "state": "online", 00:22:22.162 "raid_level": "raid1", 00:22:22.162 "superblock": false, 00:22:22.162 "num_base_bdevs": 4, 00:22:22.162 "num_base_bdevs_discovered": 3, 00:22:22.162 "num_base_bdevs_operational": 3, 00:22:22.162 "base_bdevs_list": [ 00:22:22.162 { 00:22:22.162 "name": "spare", 00:22:22.162 "uuid": "cf62462f-ddb6-58dd-b029-c193ebb60197", 00:22:22.162 "is_configured": true, 00:22:22.162 "data_offset": 0, 00:22:22.162 "data_size": 65536 00:22:22.162 }, 00:22:22.162 { 00:22:22.162 "name": null, 00:22:22.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.162 "is_configured": false, 00:22:22.162 "data_offset": 0, 00:22:22.162 "data_size": 65536 00:22:22.162 }, 00:22:22.162 { 00:22:22.162 "name": "BaseBdev3", 00:22:22.162 "uuid": "e68e4f0b-c1db-4276-96ef-2265f7f3c5d7", 00:22:22.162 "is_configured": true, 00:22:22.162 "data_offset": 0, 00:22:22.162 "data_size": 65536 00:22:22.162 }, 00:22:22.162 { 00:22:22.162 "name": "BaseBdev4", 00:22:22.162 "uuid": "2028d8ae-f91b-491c-a71d-c432d61830ef", 00:22:22.162 "is_configured": true, 00:22:22.162 "data_offset": 0, 00:22:22.162 "data_size": 65536 00:22:22.162 } 00:22:22.162 ] 00:22:22.162 }' 00:22:22.162 13:07:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.162 
13:07:40 -- common/autotest_common.sh@10 -- # set +x 00:22:22.727 13:07:41 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:22.986 [2024-06-11 13:07:41.705058] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:22.986 [2024-06-11 13:07:41.705097] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:22.986 00:22:22.986 Latency(us) 00:22:22.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.986 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:22.986 raid_bdev1 : 11.04 101.10 303.31 0.00 0.00 13920.01 307.20 111530.36 00:22:22.986 =================================================================================================================== 00:22:22.986 Total : 101.10 303.31 0.00 0.00 13920.01 307.20 111530.36 00:22:22.986 [2024-06-11 13:07:41.809102] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.986 [2024-06-11 13:07:41.809292] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.986 [2024-06-11 13:07:41.809418] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.986 0 00:22:22.986 [2024-06-11 13:07:41.809725] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:22:23.245 13:07:41 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.245 13:07:41 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:23.245 13:07:42 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:23.245 13:07:42 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:23.245 13:07:42 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:23.245 13:07:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:23.245 13:07:42 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:23.245 13:07:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:23.245 13:07:42 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:23.245 13:07:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:23.245 13:07:42 -- bdev/nbd_common.sh@12 -- # local i 00:22:23.245 13:07:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:23.245 13:07:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:23.245 13:07:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:23.503 /dev/nbd0 00:22:23.503 13:07:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:23.503 13:07:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:23.503 13:07:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:23.503 13:07:42 -- common/autotest_common.sh@857 -- # local i 00:22:23.503 13:07:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:23.504 13:07:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:23.504 13:07:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:23.504 13:07:42 -- common/autotest_common.sh@861 -- # break 00:22:23.504 13:07:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:23.504 13:07:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:23.504 13:07:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:23.504 1+0 records in 00:22:23.504 1+0 records out 00:22:23.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524408 s, 7.8 MB/s 00:22:23.504 13:07:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.504 13:07:42 -- common/autotest_common.sh@874 -- # size=4096 00:22:23.504 13:07:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.504 13:07:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:23.504 13:07:42 -- common/autotest_common.sh@877 -- # return 0 00:22:23.504 13:07:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:23.504 13:07:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:23.504 13:07:42 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:23.504 13:07:42 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:23.504 13:07:42 -- bdev/bdev_raid.sh@678 -- # continue 00:22:23.504 13:07:42 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:23.504 13:07:42 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:23.504 13:07:42 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:23.504 13:07:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:23.504 13:07:42 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:23.504 13:07:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:23.504 13:07:42 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:23.504 13:07:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:23.504 13:07:42 -- bdev/nbd_common.sh@12 -- # local i 00:22:23.504 13:07:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:23.504 13:07:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:23.504 13:07:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:23.775 /dev/nbd1 00:22:23.775 13:07:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:23.775 13:07:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:23.775 13:07:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:23.775 13:07:42 -- common/autotest_common.sh@857 -- # local i 00:22:23.775 13:07:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:23.775 13:07:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:23.775 13:07:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:23.775 13:07:42 -- common/autotest_common.sh@861 -- # break 00:22:23.775 13:07:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:23.775 13:07:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:23.775 13:07:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:23.775 1+0 records in 00:22:23.775 1+0 records out 00:22:23.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531891 s, 7.7 MB/s 00:22:23.775 13:07:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.775 13:07:42 -- common/autotest_common.sh@874 -- # size=4096 00:22:23.775 13:07:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.775 13:07:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:23.776 13:07:42 -- common/autotest_common.sh@877 -- # return 0 00:22:23.776 13:07:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:23.776 13:07:42 
-- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:23.776 13:07:42 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:24.046 13:07:42 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:24.046 13:07:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:24.046 13:07:42 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:24.046 13:07:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:24.046 13:07:42 -- bdev/nbd_common.sh@51 -- # local i 00:22:24.046 13:07:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.046 13:07:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:24.305 13:07:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:24.305 13:07:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:24.305 13:07:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:24.305 13:07:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.305 13:07:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.305 13:07:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:24.305 13:07:42 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@41 -- # break 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@45 -- # return 0 00:22:24.305 13:07:43 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:24.305 13:07:43 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:24.305 13:07:43 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@12 -- # local i 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:24.305 13:07:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:24.563 /dev/nbd1 00:22:24.563 13:07:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:24.563 13:07:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:24.563 13:07:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:24.563 13:07:43 -- common/autotest_common.sh@857 -- # local i 00:22:24.563 13:07:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:24.563 13:07:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:24.563 13:07:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:24.563 13:07:43 -- common/autotest_common.sh@861 -- # break 00:22:24.563 13:07:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:24.563 13:07:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:24.563 13:07:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:24.563 1+0 records in 00:22:24.563 1+0 records out 00:22:24.563 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000496457 s, 8.3 MB/s 00:22:24.563 13:07:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.563 13:07:43 -- common/autotest_common.sh@874 -- # size=4096 00:22:24.563 13:07:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.563 13:07:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:24.563 13:07:43 -- common/autotest_common.sh@877 -- # return 0 00:22:24.563 13:07:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:24.563 13:07:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:24.563 13:07:43 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:24.821 13:07:43 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:24.821 13:07:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:24.821 13:07:43 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:24.821 13:07:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:24.821 13:07:43 -- bdev/nbd_common.sh@51 -- # local i 00:22:24.821 13:07:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.821 13:07:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@41 -- # break 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.080 13:07:43 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@51 -- # local i 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:25.080 13:07:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@41 -- # break 00:22:25.339 13:07:44 -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.339 13:07:44 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:25.339 13:07:44 -- bdev/bdev_raid.sh@709 -- # killprocess 129312 00:22:25.339 13:07:44 -- common/autotest_common.sh@926 -- # '[' -z 129312 ']' 00:22:25.339 13:07:44 -- 
common/autotest_common.sh@930 -- # kill -0 129312 00:22:25.339 13:07:44 -- common/autotest_common.sh@931 -- # uname 00:22:25.339 13:07:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:25.339 13:07:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129312 00:22:25.339 13:07:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:25.339 13:07:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:25.339 13:07:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129312' 00:22:25.339 killing process with pid 129312 00:22:25.339 13:07:44 -- common/autotest_common.sh@945 -- # kill 129312 00:22:25.339 Received shutdown signal, test time was about 13.401808 seconds 00:22:25.339 00:22:25.339 Latency(us) 00:22:25.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.339 =================================================================================================================== 00:22:25.339 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.339 13:07:44 -- common/autotest_common.sh@950 -- # wait 129312 00:22:25.339 [2024-06-11 13:07:44.156535] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:25.906 [2024-06-11 13:07:44.464042] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:26.842 ************************************ 00:22:26.842 END TEST raid_rebuild_test_io 00:22:26.842 ************************************ 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:26.842 00:22:26.842 real 0m19.208s 00:22:26.842 user 0m29.853s 00:22:26.842 sys 0m2.152s 00:22:26.842 13:07:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.842 13:07:45 -- common/autotest_common.sh@10 -- # set +x 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:22:26.842 13:07:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:26.842 13:07:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:26.842 13:07:45 -- common/autotest_common.sh@10 -- # set +x 00:22:26.842 ************************************ 00:22:26.842 START TEST raid_rebuild_test_sb_io 00:22:26.842 ************************************ 00:22:26.842 13:07:45 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # (( i <= 
num_base_bdevs )) 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@544 -- # raid_pid=129867 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:26.842 13:07:45 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129867 /var/tmp/spdk-raid.sock 00:22:26.842 13:07:45 -- common/autotest_common.sh@819 -- # '[' -z 129867 ']' 00:22:26.842 13:07:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:26.842 13:07:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:26.842 13:07:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:26.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:26.842 13:07:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:26.842 13:07:45 -- common/autotest_common.sh@10 -- # set +x 00:22:27.101 [2024-06-11 13:07:45.718820] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:27.101 [2024-06-11 13:07:45.719259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129867 ] 00:22:27.101 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:27.101 Zero copy mechanism will not be used. 
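The bdevperf invocation above (-r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid) starts the target against a private RPC socket, and -z defers the actual I/O until the perform_tests RPC that appears later in this run; the script then blocks in waitforlisten until that socket answers. A minimal stand-alone sketch of the same wait pattern, using the paths from this log (illustrative only, not the actual autotest_common.sh implementation):

  rpc_sock=/var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r "$rpc_sock" -T raid_bdev1 \
      -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  bdevperf_pid=$!
  # poll a cheap RPC until the socket accepts requests; give up if bdevperf already died
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$bdevperf_pid" 2>/dev/null || exit 1
      sleep 0.5
  done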
00:22:27.101 [2024-06-11 13:07:45.878090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.360 [2024-06-11 13:07:46.080548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.618 [2024-06-11 13:07:46.272055] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.877 13:07:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:27.877 13:07:46 -- common/autotest_common.sh@852 -- # return 0 00:22:27.877 13:07:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:27.877 13:07:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:27.877 13:07:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:28.134 BaseBdev1_malloc 00:22:28.134 13:07:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:28.392 [2024-06-11 13:07:47.017543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:28.392 [2024-06-11 13:07:47.017940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.392 [2024-06-11 13:07:47.018135] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:28.392 [2024-06-11 13:07:47.018294] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.392 [2024-06-11 13:07:47.020927] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.392 [2024-06-11 13:07:47.021121] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:28.392 BaseBdev1 00:22:28.392 13:07:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:28.392 13:07:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:28.392 13:07:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:28.650 BaseBdev2_malloc 00:22:28.650 13:07:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:28.650 [2024-06-11 13:07:47.484014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:28.650 [2024-06-11 13:07:47.484324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.650 [2024-06-11 13:07:47.484410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:28.650 [2024-06-11 13:07:47.484662] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.650 [2024-06-11 13:07:47.487133] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.650 [2024-06-11 13:07:47.487293] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:28.650 BaseBdev2 00:22:28.909 13:07:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:28.909 13:07:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:28.909 13:07:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:28.909 BaseBdev3_malloc 00:22:28.909 13:07:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:22:29.167 [2024-06-11 13:07:47.891601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:29.167 [2024-06-11 13:07:47.891939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.167 [2024-06-11 13:07:47.892020] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:29.167 [2024-06-11 13:07:47.892293] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.167 [2024-06-11 13:07:47.894943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.167 [2024-06-11 13:07:47.895122] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:29.167 BaseBdev3 00:22:29.167 13:07:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:29.167 13:07:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:29.167 13:07:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:29.425 BaseBdev4_malloc 00:22:29.425 13:07:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:29.684 [2024-06-11 13:07:48.362287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:29.684 [2024-06-11 13:07:48.362551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.684 [2024-06-11 13:07:48.362625] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:29.684 [2024-06-11 13:07:48.362874] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.684 [2024-06-11 13:07:48.365363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.684 [2024-06-11 13:07:48.365571] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:29.684 BaseBdev4 00:22:29.684 13:07:48 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:29.942 spare_malloc 00:22:29.942 13:07:48 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:30.201 spare_delay 00:22:30.201 13:07:48 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:30.460 [2024-06-11 13:07:49.055380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:30.460 [2024-06-11 13:07:49.055709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.460 [2024-06-11 13:07:49.055780] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:30.460 [2024-06-11 13:07:49.056036] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.460 [2024-06-11 13:07:49.058493] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.460 [2024-06-11 13:07:49.058679] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:30.460 spare 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:30.460 [2024-06-11 13:07:49.243534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:30.460 [2024-06-11 13:07:49.245597] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:30.460 [2024-06-11 13:07:49.245810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:30.460 [2024-06-11 13:07:49.245981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:30.460 [2024-06-11 13:07:49.246258] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:22:30.460 [2024-06-11 13:07:49.246303] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:30.460 [2024-06-11 13:07:49.246496] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:30.460 [2024-06-11 13:07:49.246956] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:22:30.460 [2024-06-11 13:07:49.247067] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:22:30.460 [2024-06-11 13:07:49.247278] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.460 13:07:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.719 13:07:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:30.719 "name": "raid_bdev1", 00:22:30.719 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:30.719 "strip_size_kb": 0, 00:22:30.719 "state": "online", 00:22:30.719 "raid_level": "raid1", 00:22:30.719 "superblock": true, 00:22:30.719 "num_base_bdevs": 4, 00:22:30.719 "num_base_bdevs_discovered": 4, 00:22:30.719 "num_base_bdevs_operational": 4, 00:22:30.719 "base_bdevs_list": [ 00:22:30.719 { 00:22:30.719 "name": "BaseBdev1", 00:22:30.719 "uuid": "abd059be-3ca2-55f3-909f-951d00d15b30", 00:22:30.719 "is_configured": true, 00:22:30.719 "data_offset": 2048, 00:22:30.719 "data_size": 63488 00:22:30.719 }, 00:22:30.719 { 00:22:30.719 "name": "BaseBdev2", 00:22:30.719 "uuid": "49970eb0-7416-508d-9d99-b85c8644139a", 00:22:30.719 "is_configured": true, 00:22:30.719 "data_offset": 2048, 00:22:30.719 "data_size": 63488 00:22:30.719 }, 00:22:30.719 { 00:22:30.719 "name": "BaseBdev3", 00:22:30.719 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:30.719 "is_configured": true, 00:22:30.719 "data_offset": 2048, 00:22:30.719 "data_size": 63488 00:22:30.719 }, 00:22:30.719 
{ 00:22:30.719 "name": "BaseBdev4", 00:22:30.719 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:30.719 "is_configured": true, 00:22:30.719 "data_offset": 2048, 00:22:30.719 "data_size": 63488 00:22:30.719 } 00:22:30.719 ] 00:22:30.719 }' 00:22:30.719 13:07:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:30.719 13:07:49 -- common/autotest_common.sh@10 -- # set +x 00:22:31.653 13:07:50 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:31.653 13:07:50 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:31.653 [2024-06-11 13:07:50.452095] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:31.653 13:07:50 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:31.653 13:07:50 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.653 13:07:50 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:31.912 13:07:50 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:31.912 13:07:50 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:31.912 13:07:50 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:31.912 13:07:50 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:32.170 [2024-06-11 13:07:50.763406] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:32.170 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:32.170 Zero copy mechanism will not be used. 00:22:32.170 Running I/O for 60 seconds... 
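The verify_raid_bdev_state check run just above (raid_bdev1 online raid1 0 4) reduces to fetching the raid bdev's JSON over RPC and comparing individual fields with jq. The field names below are taken from the JSON printed in this log; the snippet is a simplified sketch, not the real helper in bdev_raid.sh, which also tracks the strip size and the operational/discovered counts separately:

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state'      <<<"$info") == online ]]
  [[ $(jq -r '.raid_level' <<<"$info") == raid1  ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq 4 ]]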
00:22:32.170 [2024-06-11 13:07:50.872924] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:32.170 [2024-06-11 13:07:50.879440] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.170 13:07:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.429 13:07:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:32.429 "name": "raid_bdev1", 00:22:32.429 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:32.429 "strip_size_kb": 0, 00:22:32.429 "state": "online", 00:22:32.429 "raid_level": "raid1", 00:22:32.429 "superblock": true, 00:22:32.429 "num_base_bdevs": 4, 00:22:32.429 "num_base_bdevs_discovered": 3, 00:22:32.429 "num_base_bdevs_operational": 3, 00:22:32.429 "base_bdevs_list": [ 00:22:32.429 { 00:22:32.429 "name": null, 00:22:32.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.429 "is_configured": false, 00:22:32.429 "data_offset": 2048, 00:22:32.429 "data_size": 63488 00:22:32.429 }, 00:22:32.429 { 00:22:32.429 "name": "BaseBdev2", 00:22:32.429 "uuid": "49970eb0-7416-508d-9d99-b85c8644139a", 00:22:32.429 "is_configured": true, 00:22:32.429 "data_offset": 2048, 00:22:32.429 "data_size": 63488 00:22:32.429 }, 00:22:32.429 { 00:22:32.429 "name": "BaseBdev3", 00:22:32.429 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:32.429 "is_configured": true, 00:22:32.429 "data_offset": 2048, 00:22:32.429 "data_size": 63488 00:22:32.429 }, 00:22:32.429 { 00:22:32.429 "name": "BaseBdev4", 00:22:32.429 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:32.429 "is_configured": true, 00:22:32.429 "data_offset": 2048, 00:22:32.429 "data_size": 63488 00:22:32.429 } 00:22:32.429 ] 00:22:32.429 }' 00:22:32.429 13:07:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:32.429 13:07:51 -- common/autotest_common.sh@10 -- # set +x 00:22:32.996 13:07:51 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:33.254 [2024-06-11 13:07:51.965194] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:33.254 [2024-06-11 13:07:51.965624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:33.254 13:07:52 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:33.254 [2024-06-11 13:07:52.033875] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:33.254 [2024-06-11 13:07:52.036339] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:33.512 
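The hot-remove/re-add sequence above is what drives the rebuild that the rest of this test polls: BaseBdev1 is pulled out of the running raid1 set while bdevperf I/O continues, the array is verified to stay online with three operational members, and attaching the pre-created "spare" passthru bdev then starts the rebuild. Reduced to the two RPCs involved (a sketch reusing the socket and bdev names from this log):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
  # rebuild progress then shows up under .process.progress.blocks in bdev_raid_get_bdevs output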
[2024-06-11 13:07:52.147716] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:33.512 [2024-06-11 13:07:52.148563] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:33.784 [2024-06-11 13:07:52.376009] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:33.784 [2024-06-11 13:07:52.376649] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:34.047 [2024-06-11 13:07:52.630408] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:34.047 [2024-06-11 13:07:52.631284] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:34.047 [2024-06-11 13:07:52.757041] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:34.047 [2024-06-11 13:07:52.758120] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:34.305 13:07:53 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.306 13:07:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.306 13:07:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:34.306 13:07:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:34.306 13:07:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.306 13:07:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.306 13:07:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.306 [2024-06-11 13:07:53.081386] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:34.306 [2024-06-11 13:07:53.083094] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:34.564 13:07:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.564 "name": "raid_bdev1", 00:22:34.564 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:34.564 "strip_size_kb": 0, 00:22:34.564 "state": "online", 00:22:34.564 "raid_level": "raid1", 00:22:34.564 "superblock": true, 00:22:34.564 "num_base_bdevs": 4, 00:22:34.564 "num_base_bdevs_discovered": 4, 00:22:34.564 "num_base_bdevs_operational": 4, 00:22:34.564 "process": { 00:22:34.564 "type": "rebuild", 00:22:34.564 "target": "spare", 00:22:34.564 "progress": { 00:22:34.564 "blocks": 14336, 00:22:34.564 "percent": 22 00:22:34.564 } 00:22:34.564 }, 00:22:34.564 "base_bdevs_list": [ 00:22:34.564 { 00:22:34.564 "name": "spare", 00:22:34.564 "uuid": "f66324ec-1a07-5c90-acdc-d0ef7d429175", 00:22:34.564 "is_configured": true, 00:22:34.564 "data_offset": 2048, 00:22:34.564 "data_size": 63488 00:22:34.564 }, 00:22:34.564 { 00:22:34.564 "name": "BaseBdev2", 00:22:34.564 "uuid": "49970eb0-7416-508d-9d99-b85c8644139a", 00:22:34.564 "is_configured": true, 00:22:34.564 "data_offset": 2048, 00:22:34.564 "data_size": 63488 00:22:34.564 }, 00:22:34.564 { 00:22:34.564 "name": "BaseBdev3", 00:22:34.564 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:34.564 "is_configured": true, 00:22:34.564 "data_offset": 2048, 
00:22:34.564 "data_size": 63488 00:22:34.564 }, 00:22:34.564 { 00:22:34.564 "name": "BaseBdev4", 00:22:34.564 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:34.564 "is_configured": true, 00:22:34.564 "data_offset": 2048, 00:22:34.564 "data_size": 63488 00:22:34.564 } 00:22:34.564 ] 00:22:34.564 }' 00:22:34.564 13:07:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.564 [2024-06-11 13:07:53.285332] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:34.564 13:07:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:34.564 13:07:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.564 13:07:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:34.564 13:07:53 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:34.823 [2024-06-11 13:07:53.523292] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:34.823 [2024-06-11 13:07:53.603632] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:34.823 [2024-06-11 13:07:53.631508] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:34.823 [2024-06-11 13:07:53.631871] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:35.081 [2024-06-11 13:07:53.734785] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:35.081 [2024-06-11 13:07:53.737220] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.081 [2024-06-11 13:07:53.765433] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.081 13:07:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.339 13:07:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:35.339 "name": "raid_bdev1", 00:22:35.339 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:35.339 "strip_size_kb": 0, 00:22:35.339 "state": "online", 00:22:35.339 "raid_level": "raid1", 00:22:35.339 "superblock": true, 00:22:35.339 "num_base_bdevs": 4, 00:22:35.339 "num_base_bdevs_discovered": 3, 00:22:35.339 "num_base_bdevs_operational": 3, 00:22:35.339 "base_bdevs_list": [ 00:22:35.339 { 00:22:35.339 "name": null, 00:22:35.339 "uuid": "00000000-0000-0000-0000-000000000000", 
00:22:35.339 "is_configured": false, 00:22:35.339 "data_offset": 2048, 00:22:35.339 "data_size": 63488 00:22:35.339 }, 00:22:35.339 { 00:22:35.339 "name": "BaseBdev2", 00:22:35.339 "uuid": "49970eb0-7416-508d-9d99-b85c8644139a", 00:22:35.339 "is_configured": true, 00:22:35.339 "data_offset": 2048, 00:22:35.339 "data_size": 63488 00:22:35.339 }, 00:22:35.339 { 00:22:35.339 "name": "BaseBdev3", 00:22:35.339 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:35.339 "is_configured": true, 00:22:35.339 "data_offset": 2048, 00:22:35.339 "data_size": 63488 00:22:35.339 }, 00:22:35.339 { 00:22:35.339 "name": "BaseBdev4", 00:22:35.339 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:35.339 "is_configured": true, 00:22:35.339 "data_offset": 2048, 00:22:35.339 "data_size": 63488 00:22:35.339 } 00:22:35.339 ] 00:22:35.339 }' 00:22:35.339 13:07:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:35.339 13:07:54 -- common/autotest_common.sh@10 -- # set +x 00:22:35.930 13:07:54 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:35.931 13:07:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:35.931 13:07:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:35.931 13:07:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:35.931 13:07:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:35.931 13:07:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.931 13:07:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.189 13:07:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:36.189 "name": "raid_bdev1", 00:22:36.189 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:36.189 "strip_size_kb": 0, 00:22:36.189 "state": "online", 00:22:36.189 "raid_level": "raid1", 00:22:36.189 "superblock": true, 00:22:36.189 "num_base_bdevs": 4, 00:22:36.189 "num_base_bdevs_discovered": 3, 00:22:36.189 "num_base_bdevs_operational": 3, 00:22:36.189 "base_bdevs_list": [ 00:22:36.189 { 00:22:36.189 "name": null, 00:22:36.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.189 "is_configured": false, 00:22:36.189 "data_offset": 2048, 00:22:36.189 "data_size": 63488 00:22:36.189 }, 00:22:36.189 { 00:22:36.189 "name": "BaseBdev2", 00:22:36.189 "uuid": "49970eb0-7416-508d-9d99-b85c8644139a", 00:22:36.189 "is_configured": true, 00:22:36.189 "data_offset": 2048, 00:22:36.189 "data_size": 63488 00:22:36.189 }, 00:22:36.189 { 00:22:36.189 "name": "BaseBdev3", 00:22:36.189 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:36.189 "is_configured": true, 00:22:36.189 "data_offset": 2048, 00:22:36.189 "data_size": 63488 00:22:36.189 }, 00:22:36.189 { 00:22:36.189 "name": "BaseBdev4", 00:22:36.189 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:36.189 "is_configured": true, 00:22:36.189 "data_offset": 2048, 00:22:36.189 "data_size": 63488 00:22:36.189 } 00:22:36.189 ] 00:22:36.189 }' 00:22:36.189 13:07:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:36.189 13:07:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:36.189 13:07:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:36.446 13:07:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:36.446 13:07:55 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:36.446 [2024-06-11 13:07:55.280210] 
bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:36.446 [2024-06-11 13:07:55.280666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:36.705 13:07:55 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:36.705 [2024-06-11 13:07:55.365446] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:36.705 [2024-06-11 13:07:55.367945] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:36.705 [2024-06-11 13:07:55.478312] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:36.705 [2024-06-11 13:07:55.479140] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:36.962 [2024-06-11 13:07:55.699447] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:36.962 [2024-06-11 13:07:55.700063] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:37.220 [2024-06-11 13:07:56.038530] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:37.478 [2024-06-11 13:07:56.158076] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:37.478 [2024-06-11 13:07:56.158660] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:37.736 13:07:56 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:37.736 13:07:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:37.736 13:07:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:37.736 13:07:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:37.736 13:07:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:37.736 13:07:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.736 13:07:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.736 [2024-06-11 13:07:56.448231] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:37.736 [2024-06-11 13:07:56.450064] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:37.993 13:07:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:37.993 "name": "raid_bdev1", 00:22:37.993 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:37.993 "strip_size_kb": 0, 00:22:37.993 "state": "online", 00:22:37.993 "raid_level": "raid1", 00:22:37.993 "superblock": true, 00:22:37.993 "num_base_bdevs": 4, 00:22:37.993 "num_base_bdevs_discovered": 4, 00:22:37.993 "num_base_bdevs_operational": 4, 00:22:37.993 "process": { 00:22:37.993 "type": "rebuild", 00:22:37.993 "target": "spare", 00:22:37.993 "progress": { 00:22:37.993 "blocks": 14336, 00:22:37.993 "percent": 22 00:22:37.993 } 00:22:37.993 }, 00:22:37.993 "base_bdevs_list": [ 00:22:37.993 { 00:22:37.993 "name": "spare", 00:22:37.993 "uuid": "f66324ec-1a07-5c90-acdc-d0ef7d429175", 00:22:37.993 "is_configured": true, 00:22:37.993 "data_offset": 2048, 00:22:37.993 "data_size": 63488 00:22:37.993 }, 00:22:37.993 { 00:22:37.993 "name": 
"BaseBdev2", 00:22:37.993 "uuid": "49970eb0-7416-508d-9d99-b85c8644139a", 00:22:37.993 "is_configured": true, 00:22:37.993 "data_offset": 2048, 00:22:37.993 "data_size": 63488 00:22:37.993 }, 00:22:37.993 { 00:22:37.993 "name": "BaseBdev3", 00:22:37.993 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:37.993 "is_configured": true, 00:22:37.993 "data_offset": 2048, 00:22:37.993 "data_size": 63488 00:22:37.993 }, 00:22:37.993 { 00:22:37.993 "name": "BaseBdev4", 00:22:37.993 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:37.993 "is_configured": true, 00:22:37.993 "data_offset": 2048, 00:22:37.993 "data_size": 63488 00:22:37.993 } 00:22:37.993 ] 00:22:37.993 }' 00:22:37.993 13:07:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:37.993 13:07:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:37.994 13:07:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:37.994 [2024-06-11 13:07:56.677862] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:37.994 [2024-06-11 13:07:56.678322] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:37.994 13:07:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:37.994 13:07:56 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:37.994 13:07:56 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:37.994 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:37.994 13:07:56 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:37.994 13:07:56 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:37.994 13:07:56 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:37.994 13:07:56 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:38.251 [2024-06-11 13:07:56.913362] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:38.252 [2024-06-11 13:07:56.933469] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:38.509 [2024-06-11 13:07:57.124934] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:38.509 [2024-06-11 13:07:57.133612] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:22:38.509 [2024-06-11 13:07:57.133773] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0 00:22:38.509 13:07:57 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:38.509 13:07:57 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:38.509 13:07:57 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:38.509 13:07:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:38.509 13:07:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:38.509 13:07:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:38.509 13:07:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:38.509 13:07:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.509 13:07:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.768 [2024-06-11 13:07:57.391755] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:38.768 "name": "raid_bdev1", 00:22:38.768 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:38.768 "strip_size_kb": 0, 00:22:38.768 "state": "online", 00:22:38.768 "raid_level": "raid1", 00:22:38.768 "superblock": true, 00:22:38.768 "num_base_bdevs": 4, 00:22:38.768 "num_base_bdevs_discovered": 3, 00:22:38.768 "num_base_bdevs_operational": 3, 00:22:38.768 "process": { 00:22:38.768 "type": "rebuild", 00:22:38.768 "target": "spare", 00:22:38.768 "progress": { 00:22:38.768 "blocks": 26624, 00:22:38.768 "percent": 41 00:22:38.768 } 00:22:38.768 }, 00:22:38.768 "base_bdevs_list": [ 00:22:38.768 { 00:22:38.768 "name": "spare", 00:22:38.768 "uuid": "f66324ec-1a07-5c90-acdc-d0ef7d429175", 00:22:38.768 "is_configured": true, 00:22:38.768 "data_offset": 2048, 00:22:38.768 "data_size": 63488 00:22:38.768 }, 00:22:38.768 { 00:22:38.768 "name": null, 00:22:38.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.768 "is_configured": false, 00:22:38.768 "data_offset": 2048, 00:22:38.768 "data_size": 63488 00:22:38.768 }, 00:22:38.768 { 00:22:38.768 "name": "BaseBdev3", 00:22:38.768 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:38.768 "is_configured": true, 00:22:38.768 "data_offset": 2048, 00:22:38.768 "data_size": 63488 00:22:38.768 }, 00:22:38.768 { 00:22:38.768 "name": "BaseBdev4", 00:22:38.768 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:38.768 "is_configured": true, 00:22:38.768 "data_offset": 2048, 00:22:38.768 "data_size": 63488 00:22:38.768 } 00:22:38.768 ] 00:22:38.768 }' 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@657 -- # local timeout=549 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:38.768 13:07:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:39.027 13:07:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.027 13:07:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.027 [2024-06-11 13:07:57.719309] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:39.027 13:07:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:39.027 "name": "raid_bdev1", 00:22:39.027 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:39.027 "strip_size_kb": 0, 00:22:39.027 "state": "online", 00:22:39.027 "raid_level": "raid1", 00:22:39.027 "superblock": true, 00:22:39.027 "num_base_bdevs": 4, 00:22:39.027 "num_base_bdevs_discovered": 3, 00:22:39.027 "num_base_bdevs_operational": 3, 00:22:39.027 "process": { 00:22:39.028 "type": "rebuild", 00:22:39.028 "target": "spare", 00:22:39.028 "progress": { 00:22:39.028 
"blocks": 32768, 00:22:39.028 "percent": 51 00:22:39.028 } 00:22:39.028 }, 00:22:39.028 "base_bdevs_list": [ 00:22:39.028 { 00:22:39.028 "name": "spare", 00:22:39.028 "uuid": "f66324ec-1a07-5c90-acdc-d0ef7d429175", 00:22:39.028 "is_configured": true, 00:22:39.028 "data_offset": 2048, 00:22:39.028 "data_size": 63488 00:22:39.028 }, 00:22:39.028 { 00:22:39.028 "name": null, 00:22:39.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.028 "is_configured": false, 00:22:39.028 "data_offset": 2048, 00:22:39.028 "data_size": 63488 00:22:39.028 }, 00:22:39.028 { 00:22:39.028 "name": "BaseBdev3", 00:22:39.028 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:39.028 "is_configured": true, 00:22:39.028 "data_offset": 2048, 00:22:39.028 "data_size": 63488 00:22:39.028 }, 00:22:39.028 { 00:22:39.028 "name": "BaseBdev4", 00:22:39.028 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:39.028 "is_configured": true, 00:22:39.028 "data_offset": 2048, 00:22:39.028 "data_size": 63488 00:22:39.028 } 00:22:39.028 ] 00:22:39.028 }' 00:22:39.028 13:07:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:39.028 13:07:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:39.028 13:07:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:39.028 [2024-06-11 13:07:57.846078] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:39.286 13:07:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:39.286 13:07:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:39.286 [2024-06-11 13:07:58.086232] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:39.853 [2024-06-11 13:07:58.404933] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:39.853 [2024-06-11 13:07:58.530032] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:40.110 13:07:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:40.110 13:07:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:40.110 13:07:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:40.110 13:07:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:40.110 13:07:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:40.110 13:07:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:40.110 13:07:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.110 13:07:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.368 13:07:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:40.368 "name": "raid_bdev1", 00:22:40.368 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:40.368 "strip_size_kb": 0, 00:22:40.368 "state": "online", 00:22:40.368 "raid_level": "raid1", 00:22:40.368 "superblock": true, 00:22:40.368 "num_base_bdevs": 4, 00:22:40.368 "num_base_bdevs_discovered": 3, 00:22:40.368 "num_base_bdevs_operational": 3, 00:22:40.368 "process": { 00:22:40.368 "type": "rebuild", 00:22:40.368 "target": "spare", 00:22:40.369 "progress": { 00:22:40.369 "blocks": 57344, 00:22:40.369 "percent": 90 00:22:40.369 } 00:22:40.369 }, 00:22:40.369 "base_bdevs_list": [ 00:22:40.369 { 00:22:40.369 "name": "spare", 00:22:40.369 "uuid": 
"f66324ec-1a07-5c90-acdc-d0ef7d429175", 00:22:40.369 "is_configured": true, 00:22:40.369 "data_offset": 2048, 00:22:40.369 "data_size": 63488 00:22:40.369 }, 00:22:40.369 { 00:22:40.369 "name": null, 00:22:40.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.369 "is_configured": false, 00:22:40.369 "data_offset": 2048, 00:22:40.369 "data_size": 63488 00:22:40.369 }, 00:22:40.369 { 00:22:40.369 "name": "BaseBdev3", 00:22:40.369 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:40.369 "is_configured": true, 00:22:40.369 "data_offset": 2048, 00:22:40.369 "data_size": 63488 00:22:40.369 }, 00:22:40.369 { 00:22:40.369 "name": "BaseBdev4", 00:22:40.369 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:40.369 "is_configured": true, 00:22:40.369 "data_offset": 2048, 00:22:40.369 "data_size": 63488 00:22:40.369 } 00:22:40.369 ] 00:22:40.369 }' 00:22:40.369 13:07:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:40.369 [2024-06-11 13:07:59.203165] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:40.627 13:07:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:40.627 13:07:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:40.627 13:07:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.627 13:07:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:40.885 [2024-06-11 13:07:59.528617] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:40.885 [2024-06-11 13:07:59.634320] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:40.885 [2024-06-11 13:07:59.636317] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.818 13:08:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:41.818 13:08:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:41.818 13:08:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:41.818 13:08:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:41.818 13:08:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:41.818 13:08:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:41.818 13:08:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.818 13:08:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.818 13:08:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:41.818 "name": "raid_bdev1", 00:22:41.818 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:41.818 "strip_size_kb": 0, 00:22:41.818 "state": "online", 00:22:41.818 "raid_level": "raid1", 00:22:41.818 "superblock": true, 00:22:41.818 "num_base_bdevs": 4, 00:22:41.818 "num_base_bdevs_discovered": 3, 00:22:41.818 "num_base_bdevs_operational": 3, 00:22:41.818 "base_bdevs_list": [ 00:22:41.818 { 00:22:41.818 "name": "spare", 00:22:41.818 "uuid": "f66324ec-1a07-5c90-acdc-d0ef7d429175", 00:22:41.818 "is_configured": true, 00:22:41.818 "data_offset": 2048, 00:22:41.818 "data_size": 63488 00:22:41.818 }, 00:22:41.818 { 00:22:41.818 "name": null, 00:22:41.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.819 "is_configured": false, 00:22:41.819 "data_offset": 2048, 00:22:41.819 "data_size": 63488 00:22:41.819 }, 00:22:41.819 { 00:22:41.819 "name": "BaseBdev3", 00:22:41.819 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 
00:22:41.819 "is_configured": true, 00:22:41.819 "data_offset": 2048, 00:22:41.819 "data_size": 63488 00:22:41.819 }, 00:22:41.819 { 00:22:41.819 "name": "BaseBdev4", 00:22:41.819 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:41.819 "is_configured": true, 00:22:41.819 "data_offset": 2048, 00:22:41.819 "data_size": 63488 00:22:41.819 } 00:22:41.819 ] 00:22:41.819 }' 00:22:41.819 13:08:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:41.819 13:08:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:41.819 13:08:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:41.819 13:08:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:41.819 13:08:00 -- bdev/bdev_raid.sh@660 -- # break 00:22:41.819 13:08:00 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:41.819 13:08:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:41.819 13:08:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:41.819 13:08:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:41.819 13:08:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:42.076 13:08:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.076 13:08:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.076 13:08:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:42.076 "name": "raid_bdev1", 00:22:42.076 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:42.076 "strip_size_kb": 0, 00:22:42.076 "state": "online", 00:22:42.076 "raid_level": "raid1", 00:22:42.076 "superblock": true, 00:22:42.076 "num_base_bdevs": 4, 00:22:42.076 "num_base_bdevs_discovered": 3, 00:22:42.076 "num_base_bdevs_operational": 3, 00:22:42.076 "base_bdevs_list": [ 00:22:42.076 { 00:22:42.076 "name": "spare", 00:22:42.076 "uuid": "f66324ec-1a07-5c90-acdc-d0ef7d429175", 00:22:42.076 "is_configured": true, 00:22:42.076 "data_offset": 2048, 00:22:42.076 "data_size": 63488 00:22:42.076 }, 00:22:42.076 { 00:22:42.076 "name": null, 00:22:42.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.076 "is_configured": false, 00:22:42.076 "data_offset": 2048, 00:22:42.076 "data_size": 63488 00:22:42.076 }, 00:22:42.076 { 00:22:42.076 "name": "BaseBdev3", 00:22:42.076 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:42.076 "is_configured": true, 00:22:42.076 "data_offset": 2048, 00:22:42.076 "data_size": 63488 00:22:42.076 }, 00:22:42.076 { 00:22:42.076 "name": "BaseBdev4", 00:22:42.076 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:42.076 "is_configured": true, 00:22:42.076 "data_offset": 2048, 00:22:42.076 "data_size": 63488 00:22:42.076 } 00:22:42.076 ] 00:22:42.076 }' 00:22:42.076 13:08:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:42.334 13:08:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:42.334 13:08:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:42.334 13:08:01 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.334 13:08:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.592 13:08:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:42.592 "name": "raid_bdev1", 00:22:42.592 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:42.592 "strip_size_kb": 0, 00:22:42.592 "state": "online", 00:22:42.592 "raid_level": "raid1", 00:22:42.592 "superblock": true, 00:22:42.592 "num_base_bdevs": 4, 00:22:42.592 "num_base_bdevs_discovered": 3, 00:22:42.592 "num_base_bdevs_operational": 3, 00:22:42.592 "base_bdevs_list": [ 00:22:42.592 { 00:22:42.592 "name": "spare", 00:22:42.592 "uuid": "f66324ec-1a07-5c90-acdc-d0ef7d429175", 00:22:42.592 "is_configured": true, 00:22:42.592 "data_offset": 2048, 00:22:42.592 "data_size": 63488 00:22:42.592 }, 00:22:42.592 { 00:22:42.592 "name": null, 00:22:42.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.592 "is_configured": false, 00:22:42.592 "data_offset": 2048, 00:22:42.592 "data_size": 63488 00:22:42.592 }, 00:22:42.592 { 00:22:42.592 "name": "BaseBdev3", 00:22:42.592 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:42.592 "is_configured": true, 00:22:42.592 "data_offset": 2048, 00:22:42.592 "data_size": 63488 00:22:42.592 }, 00:22:42.592 { 00:22:42.592 "name": "BaseBdev4", 00:22:42.592 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:42.592 "is_configured": true, 00:22:42.592 "data_offset": 2048, 00:22:42.592 "data_size": 63488 00:22:42.592 } 00:22:42.592 ] 00:22:42.592 }' 00:22:42.592 13:08:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:42.592 13:08:01 -- common/autotest_common.sh@10 -- # set +x 00:22:43.157 13:08:01 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:43.415 [2024-06-11 13:08:02.083141] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:43.415 [2024-06-11 13:08:02.083418] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:43.415 00:22:43.415 Latency(us) 00:22:43.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.415 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:43.415 raid_bdev1 : 11.37 97.63 292.90 0.00 0.00 14568.32 310.92 117249.86 00:22:43.415 =================================================================================================================== 00:22:43.415 Total : 97.63 292.90 0.00 0.00 14568.32 310.92 117249.86 00:22:43.415 [2024-06-11 13:08:02.150172] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.415 [2024-06-11 13:08:02.150356] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:43.415 0 00:22:43.415 [2024-06-11 13:08:02.150515] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:43.415 [2024-06-11 13:08:02.150540] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name 
raid_bdev1, state offline 00:22:43.415 13:08:02 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.415 13:08:02 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:43.672 13:08:02 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:43.672 13:08:02 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:43.672 13:08:02 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:43.672 13:08:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:43.672 13:08:02 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:43.672 13:08:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:43.672 13:08:02 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:43.672 13:08:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:43.672 13:08:02 -- bdev/nbd_common.sh@12 -- # local i 00:22:43.672 13:08:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:43.672 13:08:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:43.672 13:08:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:43.930 /dev/nbd0 00:22:43.930 13:08:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:43.930 13:08:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:43.930 13:08:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:43.930 13:08:02 -- common/autotest_common.sh@857 -- # local i 00:22:43.930 13:08:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:43.930 13:08:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:43.930 13:08:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:43.930 13:08:02 -- common/autotest_common.sh@861 -- # break 00:22:43.930 13:08:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:43.930 13:08:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:43.931 13:08:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:43.931 1+0 records in 00:22:43.931 1+0 records out 00:22:43.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417679 s, 9.8 MB/s 00:22:43.931 13:08:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:43.931 13:08:02 -- common/autotest_common.sh@874 -- # size=4096 00:22:43.931 13:08:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:43.931 13:08:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:43.931 13:08:02 -- common/autotest_common.sh@877 -- # return 0 00:22:43.931 13:08:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:43.931 13:08:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:43.931 13:08:02 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:43.931 13:08:02 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:43.931 13:08:02 -- bdev/bdev_raid.sh@678 -- # continue 00:22:43.931 13:08:02 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:43.931 13:08:02 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:43.931 13:08:02 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:43.931 13:08:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:43.931 13:08:02 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:43.931 13:08:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:43.931 13:08:02 -- 
bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:43.931 13:08:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:43.931 13:08:02 -- bdev/nbd_common.sh@12 -- # local i 00:22:43.931 13:08:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:43.931 13:08:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:43.931 13:08:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:44.189 /dev/nbd1 00:22:44.189 13:08:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:44.189 13:08:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:44.189 13:08:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:44.189 13:08:02 -- common/autotest_common.sh@857 -- # local i 00:22:44.189 13:08:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:44.189 13:08:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:44.189 13:08:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:44.189 13:08:02 -- common/autotest_common.sh@861 -- # break 00:22:44.189 13:08:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:44.189 13:08:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:44.189 13:08:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:44.189 1+0 records in 00:22:44.189 1+0 records out 00:22:44.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300712 s, 13.6 MB/s 00:22:44.189 13:08:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.189 13:08:02 -- common/autotest_common.sh@874 -- # size=4096 00:22:44.189 13:08:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.189 13:08:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:44.189 13:08:02 -- common/autotest_common.sh@877 -- # return 0 00:22:44.189 13:08:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:44.189 13:08:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:44.189 13:08:02 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:44.446 13:08:03 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:44.446 13:08:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:44.446 13:08:03 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:44.446 13:08:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:44.446 13:08:03 -- bdev/nbd_common.sh@51 -- # local i 00:22:44.446 13:08:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:44.446 13:08:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@41 -- # break 00:22:44.704 
13:08:03 -- bdev/nbd_common.sh@45 -- # return 0 00:22:44.704 13:08:03 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:44.704 13:08:03 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:44.704 13:08:03 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@12 -- # local i 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:44.704 13:08:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:44.962 /dev/nbd1 00:22:44.962 13:08:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:44.962 13:08:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:44.962 13:08:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:44.962 13:08:03 -- common/autotest_common.sh@857 -- # local i 00:22:44.962 13:08:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:44.962 13:08:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:44.962 13:08:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:44.962 13:08:03 -- common/autotest_common.sh@861 -- # break 00:22:44.962 13:08:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:44.962 13:08:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:44.962 13:08:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:44.962 1+0 records in 00:22:44.962 1+0 records out 00:22:44.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312256 s, 13.1 MB/s 00:22:44.962 13:08:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.962 13:08:03 -- common/autotest_common.sh@874 -- # size=4096 00:22:44.962 13:08:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.962 13:08:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:44.962 13:08:03 -- common/autotest_common.sh@877 -- # return 0 00:22:44.962 13:08:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:44.962 13:08:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:44.962 13:08:03 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:44.962 13:08:03 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:44.962 13:08:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:44.962 13:08:03 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:44.962 13:08:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:44.962 13:08:03 -- bdev/nbd_common.sh@51 -- # local i 00:22:44.962 13:08:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:44.962 13:08:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:45.220 13:08:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:45.220 13:08:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:45.220 13:08:04 -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:22:45.220 13:08:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:45.220 13:08:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:45.220 13:08:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:45.220 13:08:04 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:45.479 13:08:04 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:45.479 13:08:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:45.479 13:08:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:45.479 13:08:04 -- bdev/nbd_common.sh@41 -- # break 00:22:45.479 13:08:04 -- bdev/nbd_common.sh@45 -- # return 0 00:22:45.479 13:08:04 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:45.479 13:08:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:45.479 13:08:04 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:45.479 13:08:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:45.479 13:08:04 -- bdev/nbd_common.sh@51 -- # local i 00:22:45.479 13:08:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:45.479 13:08:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@41 -- # break 00:22:45.737 13:08:04 -- bdev/nbd_common.sh@45 -- # return 0 00:22:45.737 13:08:04 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:45.737 13:08:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:45.737 13:08:04 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:45.737 13:08:04 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:45.996 13:08:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:46.255 [2024-06-11 13:08:04.866719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:46.255 [2024-06-11 13:08:04.866808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.255 [2024-06-11 13:08:04.866855] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:46.255 [2024-06-11 13:08:04.866879] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.255 [2024-06-11 13:08:04.869299] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.255 [2024-06-11 13:08:04.869368] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:46.255 [2024-06-11 13:08:04.869489] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:46.255 [2024-06-11 13:08:04.869562] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:46.255 BaseBdev1 00:22:46.255 13:08:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:46.255 13:08:04 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:46.255 13:08:04 -- bdev/bdev_raid.sh@696 -- # continue 00:22:46.255 13:08:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:46.255 13:08:04 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:46.255 13:08:04 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:46.255 13:08:05 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:46.513 [2024-06-11 13:08:05.254845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:46.513 [2024-06-11 13:08:05.254912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.513 [2024-06-11 13:08:05.254953] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:46.513 [2024-06-11 13:08:05.254975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.513 [2024-06-11 13:08:05.255381] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.514 [2024-06-11 13:08:05.255449] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:46.514 [2024-06-11 13:08:05.255534] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:46.514 [2024-06-11 13:08:05.255549] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:46.514 [2024-06-11 13:08:05.255556] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:46.514 [2024-06-11 13:08:05.255577] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state configuring 00:22:46.514 [2024-06-11 13:08:05.255653] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:46.514 BaseBdev3 00:22:46.514 13:08:05 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:46.514 13:08:05 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:46.514 13:08:05 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:46.772 13:08:05 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:47.030 [2024-06-11 13:08:05.682957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:47.030 [2024-06-11 13:08:05.683034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.030 [2024-06-11 13:08:05.683068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:47.030 [2024-06-11 13:08:05.683102] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.030 [2024-06-11 13:08:05.683516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.030 [2024-06-11 13:08:05.683586] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:47.030 [2024-06-11 13:08:05.683668] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:47.030 [2024-06-11 13:08:05.683693] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:47.030 BaseBdev4 00:22:47.030 13:08:05 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:47.288 13:08:05 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:47.288 [2024-06-11 13:08:06.063094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:47.288 [2024-06-11 13:08:06.063165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.288 [2024-06-11 13:08:06.063196] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:22:47.288 [2024-06-11 13:08:06.063223] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.288 [2024-06-11 13:08:06.063641] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.288 [2024-06-11 13:08:06.063713] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:47.288 [2024-06-11 13:08:06.063805] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:47.288 [2024-06-11 13:08:06.063832] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:47.288 spare 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.288 13:08:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.547 [2024-06-11 13:08:06.163952] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c680 00:22:47.547 [2024-06-11 13:08:06.163977] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:47.547 [2024-06-11 13:08:06.164115] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a220 00:22:47.547 [2024-06-11 13:08:06.164500] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c680 00:22:47.547 [2024-06-11 13:08:06.164524] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c680 00:22:47.547 [2024-06-11 13:08:06.164661] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.547 13:08:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:47.547 "name": "raid_bdev1", 00:22:47.547 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 
00:22:47.547 "strip_size_kb": 0, 00:22:47.547 "state": "online", 00:22:47.547 "raid_level": "raid1", 00:22:47.547 "superblock": true, 00:22:47.547 "num_base_bdevs": 4, 00:22:47.547 "num_base_bdevs_discovered": 3, 00:22:47.547 "num_base_bdevs_operational": 3, 00:22:47.547 "base_bdevs_list": [ 00:22:47.547 { 00:22:47.547 "name": "spare", 00:22:47.547 "uuid": "f66324ec-1a07-5c90-acdc-d0ef7d429175", 00:22:47.547 "is_configured": true, 00:22:47.547 "data_offset": 2048, 00:22:47.547 "data_size": 63488 00:22:47.547 }, 00:22:47.547 { 00:22:47.547 "name": null, 00:22:47.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.547 "is_configured": false, 00:22:47.547 "data_offset": 2048, 00:22:47.547 "data_size": 63488 00:22:47.547 }, 00:22:47.547 { 00:22:47.547 "name": "BaseBdev3", 00:22:47.547 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:47.547 "is_configured": true, 00:22:47.547 "data_offset": 2048, 00:22:47.547 "data_size": 63488 00:22:47.547 }, 00:22:47.547 { 00:22:47.547 "name": "BaseBdev4", 00:22:47.547 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:47.547 "is_configured": true, 00:22:47.547 "data_offset": 2048, 00:22:47.547 "data_size": 63488 00:22:47.547 } 00:22:47.547 ] 00:22:47.547 }' 00:22:47.547 13:08:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:47.547 13:08:06 -- common/autotest_common.sh@10 -- # set +x 00:22:48.119 13:08:06 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:48.119 13:08:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:48.119 13:08:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:48.119 13:08:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:48.119 13:08:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:48.119 13:08:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.119 13:08:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.377 13:08:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.377 "name": "raid_bdev1", 00:22:48.377 "uuid": "545c0f96-2432-48f1-b9e5-5e4d615fe82d", 00:22:48.377 "strip_size_kb": 0, 00:22:48.377 "state": "online", 00:22:48.377 "raid_level": "raid1", 00:22:48.377 "superblock": true, 00:22:48.377 "num_base_bdevs": 4, 00:22:48.377 "num_base_bdevs_discovered": 3, 00:22:48.377 "num_base_bdevs_operational": 3, 00:22:48.377 "base_bdevs_list": [ 00:22:48.377 { 00:22:48.377 "name": "spare", 00:22:48.377 "uuid": "f66324ec-1a07-5c90-acdc-d0ef7d429175", 00:22:48.377 "is_configured": true, 00:22:48.377 "data_offset": 2048, 00:22:48.377 "data_size": 63488 00:22:48.377 }, 00:22:48.377 { 00:22:48.377 "name": null, 00:22:48.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.377 "is_configured": false, 00:22:48.377 "data_offset": 2048, 00:22:48.377 "data_size": 63488 00:22:48.377 }, 00:22:48.377 { 00:22:48.378 "name": "BaseBdev3", 00:22:48.378 "uuid": "f9fd161b-f4ec-52ea-951d-34afdd2833aa", 00:22:48.378 "is_configured": true, 00:22:48.378 "data_offset": 2048, 00:22:48.378 "data_size": 63488 00:22:48.378 }, 00:22:48.378 { 00:22:48.378 "name": "BaseBdev4", 00:22:48.378 "uuid": "3880db29-7425-56aa-9393-456002d2acfc", 00:22:48.378 "is_configured": true, 00:22:48.378 "data_offset": 2048, 00:22:48.378 "data_size": 63488 00:22:48.378 } 00:22:48.378 ] 00:22:48.378 }' 00:22:48.378 13:08:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.378 13:08:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 
00:22:48.378 13:08:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.378 13:08:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:48.378 13:08:07 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.378 13:08:07 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:48.636 13:08:07 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.636 13:08:07 -- bdev/bdev_raid.sh@709 -- # killprocess 129867 00:22:48.636 13:08:07 -- common/autotest_common.sh@926 -- # '[' -z 129867 ']' 00:22:48.636 13:08:07 -- common/autotest_common.sh@930 -- # kill -0 129867 00:22:48.636 13:08:07 -- common/autotest_common.sh@931 -- # uname 00:22:48.636 13:08:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:48.636 13:08:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129867 00:22:48.636 13:08:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:48.636 killing process with pid 129867 00:22:48.636 13:08:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:48.636 13:08:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129867' 00:22:48.636 13:08:07 -- common/autotest_common.sh@945 -- # kill 129867 00:22:48.636 Received shutdown signal, test time was about 16.652230 seconds 00:22:48.636 00:22:48.636 Latency(us) 00:22:48.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.636 =================================================================================================================== 00:22:48.636 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.636 13:08:07 -- common/autotest_common.sh@950 -- # wait 129867 00:22:48.636 [2024-06-11 13:08:07.418277] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:48.636 [2024-06-11 13:08:07.418357] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:48.636 [2024-06-11 13:08:07.418463] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:48.636 [2024-06-11 13:08:07.418504] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c680 name raid_bdev1, state offline 00:22:48.894 [2024-06-11 13:08:07.712631] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:50.269 00:22:50.269 real 0m23.126s 00:22:50.269 user 0m37.137s 00:22:50.269 sys 0m2.638s 00:22:50.269 13:08:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.269 13:08:08 -- common/autotest_common.sh@10 -- # set +x 00:22:50.269 ************************************ 00:22:50.269 END TEST raid_rebuild_test_sb_io 00:22:50.269 ************************************ 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:22:50.269 13:08:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:50.269 13:08:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:50.269 13:08:08 -- common/autotest_common.sh@10 -- # set +x 00:22:50.269 ************************************ 00:22:50.269 START TEST raid5f_state_function_test 00:22:50.269 ************************************ 00:22:50.269 13:08:08 -- common/autotest_common.sh@1104 -- # 
raid_state_function_test raid5f 3 false 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=130548 00:22:50.269 Process raid pid: 130548 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130548' 00:22:50.269 13:08:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130548 /var/tmp/spdk-raid.sock 00:22:50.269 13:08:08 -- common/autotest_common.sh@819 -- # '[' -z 130548 ']' 00:22:50.269 13:08:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:50.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:50.269 13:08:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:50.269 13:08:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:50.269 13:08:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:50.269 13:08:08 -- common/autotest_common.sh@10 -- # set +x 00:22:50.269 [2024-06-11 13:08:08.890206] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
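The trace that follows shows raid5f_state_function_test bringing up a bdev_svc app on /var/tmp/spdk-raid.sock and then driving it entirely through rpc.py: the raid5f array is created first, while none of its base bdevs exists yet, so it stays in the "configuring" state, and the malloc-backed base bdevs are added afterwards. Run by hand, that first phase amounts to roughly the commands below; the RPC calls are taken verbatim from the log, while the standalone script framing around them is an assumption:

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# raid5f with a 64 KiB strip size; superblock=false, so no superblock flag is passed.
# None of BaseBdev1..3 exists yet, so the array comes up in the "configuring" state.
$rpc_py bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# The state can be checked the same way the test's verify helpers do.
$rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

# Backing a slot with a real device: a 32 MiB malloc disk (65536 blocks of 512 bytes),
# matching the bdev_malloc_create call traced further below for BaseBdev1.
$rpc_py bdev_malloc_create 32 512 -b BaseBdev1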
00:22:50.269 [2024-06-11 13:08:08.890386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.269 [2024-06-11 13:08:09.039747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.528 [2024-06-11 13:08:09.226111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.786 [2024-06-11 13:08:09.418191] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:51.044 13:08:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:51.044 13:08:09 -- common/autotest_common.sh@852 -- # return 0 00:22:51.044 13:08:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:51.301 [2024-06-11 13:08:10.049520] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:51.301 [2024-06-11 13:08:10.049613] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:51.301 [2024-06-11 13:08:10.049645] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:51.301 [2024-06-11 13:08:10.049670] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:51.301 [2024-06-11 13:08:10.049679] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:51.301 [2024-06-11 13:08:10.049727] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.301 13:08:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.559 13:08:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:51.559 "name": "Existed_Raid", 00:22:51.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.559 "strip_size_kb": 64, 00:22:51.559 "state": "configuring", 00:22:51.559 "raid_level": "raid5f", 00:22:51.559 "superblock": false, 00:22:51.559 "num_base_bdevs": 3, 00:22:51.559 "num_base_bdevs_discovered": 0, 00:22:51.559 "num_base_bdevs_operational": 3, 00:22:51.559 "base_bdevs_list": [ 00:22:51.559 { 00:22:51.559 "name": "BaseBdev1", 00:22:51.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.559 "is_configured": false, 00:22:51.559 "data_offset": 0, 00:22:51.559 "data_size": 0 00:22:51.559 }, 00:22:51.559 { 00:22:51.559 "name": "BaseBdev2", 00:22:51.559 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:51.559 "is_configured": false, 00:22:51.559 "data_offset": 0, 00:22:51.559 "data_size": 0 00:22:51.559 }, 00:22:51.559 { 00:22:51.559 "name": "BaseBdev3", 00:22:51.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.559 "is_configured": false, 00:22:51.559 "data_offset": 0, 00:22:51.559 "data_size": 0 00:22:51.559 } 00:22:51.559 ] 00:22:51.559 }' 00:22:51.559 13:08:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:51.559 13:08:10 -- common/autotest_common.sh@10 -- # set +x 00:22:52.125 13:08:10 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:52.383 [2024-06-11 13:08:11.129587] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:52.384 [2024-06-11 13:08:11.129618] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:22:52.384 13:08:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:52.642 [2024-06-11 13:08:11.325657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:52.642 [2024-06-11 13:08:11.325716] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:52.642 [2024-06-11 13:08:11.325730] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:52.642 [2024-06-11 13:08:11.325752] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:52.642 [2024-06-11 13:08:11.325761] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:52.642 [2024-06-11 13:08:11.325798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:52.642 13:08:11 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:52.900 [2024-06-11 13:08:11.559063] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:52.900 BaseBdev1 00:22:52.900 13:08:11 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:52.900 13:08:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:22:52.900 13:08:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:52.900 13:08:11 -- common/autotest_common.sh@889 -- # local i 00:22:52.900 13:08:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:52.900 13:08:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:52.900 13:08:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:53.159 13:08:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:53.159 [ 00:22:53.159 { 00:22:53.159 "name": "BaseBdev1", 00:22:53.159 "aliases": [ 00:22:53.159 "8405eaca-baa6-41e1-821b-08d9b569513b" 00:22:53.159 ], 00:22:53.159 "product_name": "Malloc disk", 00:22:53.159 "block_size": 512, 00:22:53.159 "num_blocks": 65536, 00:22:53.159 "uuid": "8405eaca-baa6-41e1-821b-08d9b569513b", 00:22:53.159 "assigned_rate_limits": { 00:22:53.159 "rw_ios_per_sec": 0, 00:22:53.159 "rw_mbytes_per_sec": 0, 00:22:53.159 "r_mbytes_per_sec": 0, 00:22:53.159 "w_mbytes_per_sec": 
0 00:22:53.159 }, 00:22:53.159 "claimed": true, 00:22:53.159 "claim_type": "exclusive_write", 00:22:53.159 "zoned": false, 00:22:53.159 "supported_io_types": { 00:22:53.159 "read": true, 00:22:53.159 "write": true, 00:22:53.159 "unmap": true, 00:22:53.159 "write_zeroes": true, 00:22:53.159 "flush": true, 00:22:53.159 "reset": true, 00:22:53.159 "compare": false, 00:22:53.159 "compare_and_write": false, 00:22:53.159 "abort": true, 00:22:53.159 "nvme_admin": false, 00:22:53.159 "nvme_io": false 00:22:53.159 }, 00:22:53.159 "memory_domains": [ 00:22:53.159 { 00:22:53.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.159 "dma_device_type": 2 00:22:53.159 } 00:22:53.159 ], 00:22:53.159 "driver_specific": {} 00:22:53.159 } 00:22:53.159 ] 00:22:53.159 13:08:11 -- common/autotest_common.sh@895 -- # return 0 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.159 13:08:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.417 13:08:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:53.417 "name": "Existed_Raid", 00:22:53.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.417 "strip_size_kb": 64, 00:22:53.417 "state": "configuring", 00:22:53.417 "raid_level": "raid5f", 00:22:53.417 "superblock": false, 00:22:53.417 "num_base_bdevs": 3, 00:22:53.417 "num_base_bdevs_discovered": 1, 00:22:53.417 "num_base_bdevs_operational": 3, 00:22:53.417 "base_bdevs_list": [ 00:22:53.417 { 00:22:53.417 "name": "BaseBdev1", 00:22:53.417 "uuid": "8405eaca-baa6-41e1-821b-08d9b569513b", 00:22:53.417 "is_configured": true, 00:22:53.417 "data_offset": 0, 00:22:53.417 "data_size": 65536 00:22:53.417 }, 00:22:53.417 { 00:22:53.417 "name": "BaseBdev2", 00:22:53.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.417 "is_configured": false, 00:22:53.417 "data_offset": 0, 00:22:53.417 "data_size": 0 00:22:53.417 }, 00:22:53.417 { 00:22:53.417 "name": "BaseBdev3", 00:22:53.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.417 "is_configured": false, 00:22:53.417 "data_offset": 0, 00:22:53.417 "data_size": 0 00:22:53.417 } 00:22:53.417 ] 00:22:53.417 }' 00:22:53.417 13:08:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:53.417 13:08:12 -- common/autotest_common.sh@10 -- # set +x 00:22:53.982 13:08:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:54.240 [2024-06-11 13:08:12.995372] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:54.240 [2024-06-11 13:08:12.995785] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:22:54.240 13:08:13 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:22:54.240 13:08:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:54.498 [2024-06-11 13:08:13.251455] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:54.498 [2024-06-11 13:08:13.253660] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:54.498 [2024-06-11 13:08:13.253823] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:54.498 [2024-06-11 13:08:13.253924] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:54.498 [2024-06-11 13:08:13.253978] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.498 13:08:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.756 13:08:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:54.756 "name": "Existed_Raid", 00:22:54.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.756 "strip_size_kb": 64, 00:22:54.756 "state": "configuring", 00:22:54.756 "raid_level": "raid5f", 00:22:54.756 "superblock": false, 00:22:54.756 "num_base_bdevs": 3, 00:22:54.756 "num_base_bdevs_discovered": 1, 00:22:54.756 "num_base_bdevs_operational": 3, 00:22:54.756 "base_bdevs_list": [ 00:22:54.756 { 00:22:54.756 "name": "BaseBdev1", 00:22:54.756 "uuid": "8405eaca-baa6-41e1-821b-08d9b569513b", 00:22:54.756 "is_configured": true, 00:22:54.756 "data_offset": 0, 00:22:54.756 "data_size": 65536 00:22:54.756 }, 00:22:54.756 { 00:22:54.756 "name": "BaseBdev2", 00:22:54.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.756 "is_configured": false, 00:22:54.756 "data_offset": 0, 00:22:54.756 "data_size": 0 00:22:54.756 }, 00:22:54.756 { 00:22:54.756 "name": "BaseBdev3", 00:22:54.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.756 "is_configured": false, 00:22:54.756 "data_offset": 0, 00:22:54.756 "data_size": 0 00:22:54.756 } 00:22:54.756 ] 00:22:54.756 }' 00:22:54.756 13:08:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:54.756 13:08:13 -- common/autotest_common.sh@10 -- # set +x 00:22:55.322 13:08:14 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:55.580 [2024-06-11 13:08:14.326335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:55.580 BaseBdev2 00:22:55.580 13:08:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:55.580 13:08:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:22:55.580 13:08:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:55.580 13:08:14 -- common/autotest_common.sh@889 -- # local i 00:22:55.580 13:08:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:55.580 13:08:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:55.580 13:08:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:55.839 13:08:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:56.098 [ 00:22:56.098 { 00:22:56.098 "name": "BaseBdev2", 00:22:56.098 "aliases": [ 00:22:56.098 "9470e481-f8e0-4082-892b-adbdd195996a" 00:22:56.098 ], 00:22:56.098 "product_name": "Malloc disk", 00:22:56.098 "block_size": 512, 00:22:56.098 "num_blocks": 65536, 00:22:56.098 "uuid": "9470e481-f8e0-4082-892b-adbdd195996a", 00:22:56.098 "assigned_rate_limits": { 00:22:56.098 "rw_ios_per_sec": 0, 00:22:56.098 "rw_mbytes_per_sec": 0, 00:22:56.098 "r_mbytes_per_sec": 0, 00:22:56.098 "w_mbytes_per_sec": 0 00:22:56.098 }, 00:22:56.098 "claimed": true, 00:22:56.098 "claim_type": "exclusive_write", 00:22:56.098 "zoned": false, 00:22:56.098 "supported_io_types": { 00:22:56.098 "read": true, 00:22:56.098 "write": true, 00:22:56.098 "unmap": true, 00:22:56.098 "write_zeroes": true, 00:22:56.098 "flush": true, 00:22:56.098 "reset": true, 00:22:56.098 "compare": false, 00:22:56.098 "compare_and_write": false, 00:22:56.098 "abort": true, 00:22:56.098 "nvme_admin": false, 00:22:56.098 "nvme_io": false 00:22:56.098 }, 00:22:56.098 "memory_domains": [ 00:22:56.098 { 00:22:56.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.098 "dma_device_type": 2 00:22:56.098 } 00:22:56.098 ], 00:22:56.098 "driver_specific": {} 00:22:56.098 } 00:22:56.098 ] 00:22:56.098 13:08:14 -- common/autotest_common.sh@895 -- # return 0 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.098 13:08:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:22:56.356 13:08:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:56.356 "name": "Existed_Raid", 00:22:56.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.356 "strip_size_kb": 64, 00:22:56.356 "state": "configuring", 00:22:56.356 "raid_level": "raid5f", 00:22:56.356 "superblock": false, 00:22:56.356 "num_base_bdevs": 3, 00:22:56.357 "num_base_bdevs_discovered": 2, 00:22:56.357 "num_base_bdevs_operational": 3, 00:22:56.357 "base_bdevs_list": [ 00:22:56.357 { 00:22:56.357 "name": "BaseBdev1", 00:22:56.357 "uuid": "8405eaca-baa6-41e1-821b-08d9b569513b", 00:22:56.357 "is_configured": true, 00:22:56.357 "data_offset": 0, 00:22:56.357 "data_size": 65536 00:22:56.357 }, 00:22:56.357 { 00:22:56.357 "name": "BaseBdev2", 00:22:56.357 "uuid": "9470e481-f8e0-4082-892b-adbdd195996a", 00:22:56.357 "is_configured": true, 00:22:56.357 "data_offset": 0, 00:22:56.357 "data_size": 65536 00:22:56.357 }, 00:22:56.357 { 00:22:56.357 "name": "BaseBdev3", 00:22:56.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.357 "is_configured": false, 00:22:56.357 "data_offset": 0, 00:22:56.357 "data_size": 0 00:22:56.357 } 00:22:56.357 ] 00:22:56.357 }' 00:22:56.357 13:08:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:56.357 13:08:15 -- common/autotest_common.sh@10 -- # set +x 00:22:56.924 13:08:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:57.184 [2024-06-11 13:08:15.840412] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:57.184 [2024-06-11 13:08:15.840777] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:22:57.184 [2024-06-11 13:08:15.840821] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:57.184 [2024-06-11 13:08:15.841043] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:22:57.184 [2024-06-11 13:08:15.845718] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:22:57.184 [2024-06-11 13:08:15.845882] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:22:57.184 [2024-06-11 13:08:15.846334] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.184 BaseBdev3 00:22:57.184 13:08:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:57.184 13:08:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:22:57.184 13:08:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:57.184 13:08:15 -- common/autotest_common.sh@889 -- # local i 00:22:57.184 13:08:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:57.184 13:08:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:57.184 13:08:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:57.442 13:08:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:57.702 [ 00:22:57.702 { 00:22:57.702 "name": "BaseBdev3", 00:22:57.702 "aliases": [ 00:22:57.702 "86eb8ecd-f988-4389-a6bb-dfa820eadbf6" 00:22:57.702 ], 00:22:57.702 "product_name": "Malloc disk", 00:22:57.702 "block_size": 512, 00:22:57.702 "num_blocks": 65536, 00:22:57.702 "uuid": "86eb8ecd-f988-4389-a6bb-dfa820eadbf6", 00:22:57.702 "assigned_rate_limits": { 00:22:57.702 
"rw_ios_per_sec": 0, 00:22:57.702 "rw_mbytes_per_sec": 0, 00:22:57.702 "r_mbytes_per_sec": 0, 00:22:57.702 "w_mbytes_per_sec": 0 00:22:57.702 }, 00:22:57.702 "claimed": true, 00:22:57.702 "claim_type": "exclusive_write", 00:22:57.702 "zoned": false, 00:22:57.702 "supported_io_types": { 00:22:57.702 "read": true, 00:22:57.702 "write": true, 00:22:57.702 "unmap": true, 00:22:57.702 "write_zeroes": true, 00:22:57.702 "flush": true, 00:22:57.702 "reset": true, 00:22:57.702 "compare": false, 00:22:57.702 "compare_and_write": false, 00:22:57.702 "abort": true, 00:22:57.702 "nvme_admin": false, 00:22:57.702 "nvme_io": false 00:22:57.702 }, 00:22:57.702 "memory_domains": [ 00:22:57.702 { 00:22:57.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.702 "dma_device_type": 2 00:22:57.702 } 00:22:57.702 ], 00:22:57.702 "driver_specific": {} 00:22:57.702 } 00:22:57.702 ] 00:22:57.702 13:08:16 -- common/autotest_common.sh@895 -- # return 0 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.702 13:08:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:57.961 13:08:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:57.961 "name": "Existed_Raid", 00:22:57.961 "uuid": "920ab16a-004b-4e33-9ec8-8cf43e9d9bd3", 00:22:57.961 "strip_size_kb": 64, 00:22:57.961 "state": "online", 00:22:57.961 "raid_level": "raid5f", 00:22:57.961 "superblock": false, 00:22:57.961 "num_base_bdevs": 3, 00:22:57.961 "num_base_bdevs_discovered": 3, 00:22:57.961 "num_base_bdevs_operational": 3, 00:22:57.961 "base_bdevs_list": [ 00:22:57.961 { 00:22:57.961 "name": "BaseBdev1", 00:22:57.961 "uuid": "8405eaca-baa6-41e1-821b-08d9b569513b", 00:22:57.961 "is_configured": true, 00:22:57.961 "data_offset": 0, 00:22:57.961 "data_size": 65536 00:22:57.961 }, 00:22:57.961 { 00:22:57.961 "name": "BaseBdev2", 00:22:57.961 "uuid": "9470e481-f8e0-4082-892b-adbdd195996a", 00:22:57.961 "is_configured": true, 00:22:57.961 "data_offset": 0, 00:22:57.961 "data_size": 65536 00:22:57.961 }, 00:22:57.961 { 00:22:57.961 "name": "BaseBdev3", 00:22:57.961 "uuid": "86eb8ecd-f988-4389-a6bb-dfa820eadbf6", 00:22:57.961 "is_configured": true, 00:22:57.961 "data_offset": 0, 00:22:57.961 "data_size": 65536 00:22:57.961 } 00:22:57.961 ] 00:22:57.961 }' 00:22:57.961 13:08:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:57.961 13:08:16 -- common/autotest_common.sh@10 -- # set +x 00:22:58.528 13:08:17 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:22:58.787 [2024-06-11 13:08:17.383961] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.787 13:08:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.045 13:08:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:59.045 "name": "Existed_Raid", 00:22:59.045 "uuid": "920ab16a-004b-4e33-9ec8-8cf43e9d9bd3", 00:22:59.045 "strip_size_kb": 64, 00:22:59.045 "state": "online", 00:22:59.045 "raid_level": "raid5f", 00:22:59.045 "superblock": false, 00:22:59.045 "num_base_bdevs": 3, 00:22:59.045 "num_base_bdevs_discovered": 2, 00:22:59.046 "num_base_bdevs_operational": 2, 00:22:59.046 "base_bdevs_list": [ 00:22:59.046 { 00:22:59.046 "name": null, 00:22:59.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.046 "is_configured": false, 00:22:59.046 "data_offset": 0, 00:22:59.046 "data_size": 65536 00:22:59.046 }, 00:22:59.046 { 00:22:59.046 "name": "BaseBdev2", 00:22:59.046 "uuid": "9470e481-f8e0-4082-892b-adbdd195996a", 00:22:59.046 "is_configured": true, 00:22:59.046 "data_offset": 0, 00:22:59.046 "data_size": 65536 00:22:59.046 }, 00:22:59.046 { 00:22:59.046 "name": "BaseBdev3", 00:22:59.046 "uuid": "86eb8ecd-f988-4389-a6bb-dfa820eadbf6", 00:22:59.046 "is_configured": true, 00:22:59.046 "data_offset": 0, 00:22:59.046 "data_size": 65536 00:22:59.046 } 00:22:59.046 ] 00:22:59.046 }' 00:22:59.046 13:08:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:59.046 13:08:17 -- common/autotest_common.sh@10 -- # set +x 00:22:59.612 13:08:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:59.612 13:08:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:59.612 13:08:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:59.612 13:08:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.871 13:08:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:59.871 13:08:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:59.871 13:08:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:00.135 [2024-06-11 13:08:18.822344] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:00.135 [2024-06-11 13:08:18.822515] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:00.135 [2024-06-11 13:08:18.822710] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:00.135 13:08:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:00.135 13:08:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:00.135 13:08:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.135 13:08:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:00.406 13:08:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:00.406 13:08:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:00.406 13:08:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:00.665 [2024-06-11 13:08:19.396439] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:00.665 [2024-06-11 13:08:19.396679] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:23:00.665 13:08:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:00.665 13:08:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:00.665 13:08:19 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.665 13:08:19 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:00.926 13:08:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:00.926 13:08:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:00.926 13:08:19 -- bdev/bdev_raid.sh@287 -- # killprocess 130548 00:23:00.926 13:08:19 -- common/autotest_common.sh@926 -- # '[' -z 130548 ']' 00:23:00.926 13:08:19 -- common/autotest_common.sh@930 -- # kill -0 130548 00:23:00.926 13:08:19 -- common/autotest_common.sh@931 -- # uname 00:23:00.926 13:08:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:00.926 13:08:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130548 00:23:00.926 killing process with pid 130548 00:23:00.926 13:08:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:00.926 13:08:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:00.926 13:08:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130548' 00:23:00.926 13:08:19 -- common/autotest_common.sh@945 -- # kill 130548 00:23:00.926 13:08:19 -- common/autotest_common.sh@950 -- # wait 130548 00:23:00.926 [2024-06-11 13:08:19.687127] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:00.926 [2024-06-11 13:08:19.687275] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:01.861 ************************************ 00:23:01.861 END TEST raid5f_state_function_test 00:23:01.861 ************************************ 00:23:01.861 13:08:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:01.861 00:23:01.861 real 0m11.801s 00:23:01.861 user 0m20.978s 00:23:01.861 sys 0m1.319s 00:23:01.861 13:08:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:01.861 13:08:20 -- common/autotest_common.sh@10 -- # set +x 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:23:01.862 13:08:20 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:01.862 
13:08:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:01.862 13:08:20 -- common/autotest_common.sh@10 -- # set +x 00:23:01.862 ************************************ 00:23:01.862 START TEST raid5f_state_function_test_sb 00:23:01.862 ************************************ 00:23:01.862 13:08:20 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:01.862 13:08:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:02.120 Process raid pid: 130938 00:23:02.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@226 -- # raid_pid=130938 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130938' 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130938 /var/tmp/spdk-raid.sock 00:23:02.120 13:08:20 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:02.120 13:08:20 -- common/autotest_common.sh@819 -- # '[' -z 130938 ']' 00:23:02.120 13:08:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:02.120 13:08:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:02.120 13:08:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:23:02.120 13:08:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:02.120 13:08:20 -- common/autotest_common.sh@10 -- # set +x 00:23:02.120 [2024-06-11 13:08:20.760888] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:02.120 [2024-06-11 13:08:20.761217] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.121 [2024-06-11 13:08:20.931317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.379 [2024-06-11 13:08:21.179595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.638 [2024-06-11 13:08:21.377152] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:02.896 13:08:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:02.896 13:08:21 -- common/autotest_common.sh@852 -- # return 0 00:23:02.896 13:08:21 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:03.155 [2024-06-11 13:08:21.844453] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:03.155 [2024-06-11 13:08:21.844816] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:03.155 [2024-06-11 13:08:21.844933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:03.155 [2024-06-11 13:08:21.844999] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:03.155 [2024-06-11 13:08:21.845269] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:03.155 [2024-06-11 13:08:21.845378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.155 13:08:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.430 13:08:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.430 "name": "Existed_Raid", 00:23:03.430 "uuid": "977c6b76-682d-48b6-ab5e-5ca8737bf0e2", 00:23:03.430 "strip_size_kb": 64, 00:23:03.430 "state": "configuring", 00:23:03.430 "raid_level": "raid5f", 00:23:03.430 "superblock": true, 00:23:03.430 "num_base_bdevs": 3, 00:23:03.430 "num_base_bdevs_discovered": 0, 00:23:03.430 "num_base_bdevs_operational": 3, 00:23:03.430 "base_bdevs_list": [ 00:23:03.430 { 00:23:03.430 "name": 
"BaseBdev1", 00:23:03.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.430 "is_configured": false, 00:23:03.430 "data_offset": 0, 00:23:03.430 "data_size": 0 00:23:03.430 }, 00:23:03.430 { 00:23:03.430 "name": "BaseBdev2", 00:23:03.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.430 "is_configured": false, 00:23:03.430 "data_offset": 0, 00:23:03.430 "data_size": 0 00:23:03.430 }, 00:23:03.430 { 00:23:03.430 "name": "BaseBdev3", 00:23:03.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.430 "is_configured": false, 00:23:03.430 "data_offset": 0, 00:23:03.430 "data_size": 0 00:23:03.430 } 00:23:03.430 ] 00:23:03.430 }' 00:23:03.430 13:08:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.430 13:08:22 -- common/autotest_common.sh@10 -- # set +x 00:23:03.997 13:08:22 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:04.255 [2024-06-11 13:08:22.912514] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:04.255 [2024-06-11 13:08:22.912796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:04.255 13:08:22 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:04.514 [2024-06-11 13:08:23.108655] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:04.514 [2024-06-11 13:08:23.108893] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:04.514 [2024-06-11 13:08:23.109014] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:04.514 [2024-06-11 13:08:23.109070] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:04.514 [2024-06-11 13:08:23.109274] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:04.514 [2024-06-11 13:08:23.109346] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:04.514 13:08:23 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:04.772 [2024-06-11 13:08:23.368600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:04.772 BaseBdev1 00:23:04.772 13:08:23 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:04.772 13:08:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:04.772 13:08:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:04.772 13:08:23 -- common/autotest_common.sh@889 -- # local i 00:23:04.772 13:08:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:04.772 13:08:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:04.772 13:08:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:04.772 13:08:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:05.030 [ 00:23:05.030 { 00:23:05.030 "name": "BaseBdev1", 00:23:05.030 "aliases": [ 00:23:05.030 "aac775de-74d8-41ba-b03e-2af655ddcecd" 00:23:05.030 ], 00:23:05.030 "product_name": "Malloc disk", 00:23:05.030 "block_size": 512, 00:23:05.030 
"num_blocks": 65536, 00:23:05.030 "uuid": "aac775de-74d8-41ba-b03e-2af655ddcecd", 00:23:05.030 "assigned_rate_limits": { 00:23:05.030 "rw_ios_per_sec": 0, 00:23:05.030 "rw_mbytes_per_sec": 0, 00:23:05.030 "r_mbytes_per_sec": 0, 00:23:05.030 "w_mbytes_per_sec": 0 00:23:05.030 }, 00:23:05.030 "claimed": true, 00:23:05.030 "claim_type": "exclusive_write", 00:23:05.030 "zoned": false, 00:23:05.030 "supported_io_types": { 00:23:05.030 "read": true, 00:23:05.030 "write": true, 00:23:05.030 "unmap": true, 00:23:05.030 "write_zeroes": true, 00:23:05.030 "flush": true, 00:23:05.030 "reset": true, 00:23:05.030 "compare": false, 00:23:05.030 "compare_and_write": false, 00:23:05.030 "abort": true, 00:23:05.030 "nvme_admin": false, 00:23:05.030 "nvme_io": false 00:23:05.030 }, 00:23:05.030 "memory_domains": [ 00:23:05.030 { 00:23:05.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.030 "dma_device_type": 2 00:23:05.030 } 00:23:05.030 ], 00:23:05.030 "driver_specific": {} 00:23:05.030 } 00:23:05.030 ] 00:23:05.030 13:08:23 -- common/autotest_common.sh@895 -- # return 0 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.030 13:08:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.288 13:08:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:05.288 "name": "Existed_Raid", 00:23:05.288 "uuid": "5878dfcc-b127-48a1-a464-096ccd49bf4d", 00:23:05.288 "strip_size_kb": 64, 00:23:05.288 "state": "configuring", 00:23:05.288 "raid_level": "raid5f", 00:23:05.288 "superblock": true, 00:23:05.288 "num_base_bdevs": 3, 00:23:05.288 "num_base_bdevs_discovered": 1, 00:23:05.288 "num_base_bdevs_operational": 3, 00:23:05.288 "base_bdevs_list": [ 00:23:05.288 { 00:23:05.288 "name": "BaseBdev1", 00:23:05.288 "uuid": "aac775de-74d8-41ba-b03e-2af655ddcecd", 00:23:05.288 "is_configured": true, 00:23:05.288 "data_offset": 2048, 00:23:05.288 "data_size": 63488 00:23:05.288 }, 00:23:05.288 { 00:23:05.288 "name": "BaseBdev2", 00:23:05.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.288 "is_configured": false, 00:23:05.288 "data_offset": 0, 00:23:05.288 "data_size": 0 00:23:05.288 }, 00:23:05.288 { 00:23:05.288 "name": "BaseBdev3", 00:23:05.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.288 "is_configured": false, 00:23:05.288 "data_offset": 0, 00:23:05.288 "data_size": 0 00:23:05.288 } 00:23:05.288 ] 00:23:05.288 }' 00:23:05.288 13:08:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:05.288 13:08:24 -- common/autotest_common.sh@10 -- # set +x 00:23:05.853 13:08:24 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:06.111 [2024-06-11 13:08:24.837025] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:06.111 [2024-06-11 13:08:24.837260] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:06.111 13:08:24 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:06.111 13:08:24 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:06.369 13:08:25 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:06.628 BaseBdev1 00:23:06.628 13:08:25 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:06.628 13:08:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:06.628 13:08:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:06.628 13:08:25 -- common/autotest_common.sh@889 -- # local i 00:23:06.628 13:08:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:06.628 13:08:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:06.628 13:08:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:06.886 13:08:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:07.144 [ 00:23:07.144 { 00:23:07.144 "name": "BaseBdev1", 00:23:07.144 "aliases": [ 00:23:07.144 "e72a3f20-1024-4dd0-b2f0-ddd3a0dd3622" 00:23:07.144 ], 00:23:07.144 "product_name": "Malloc disk", 00:23:07.144 "block_size": 512, 00:23:07.144 "num_blocks": 65536, 00:23:07.144 "uuid": "e72a3f20-1024-4dd0-b2f0-ddd3a0dd3622", 00:23:07.144 "assigned_rate_limits": { 00:23:07.144 "rw_ios_per_sec": 0, 00:23:07.144 "rw_mbytes_per_sec": 0, 00:23:07.144 "r_mbytes_per_sec": 0, 00:23:07.144 "w_mbytes_per_sec": 0 00:23:07.144 }, 00:23:07.144 "claimed": false, 00:23:07.144 "zoned": false, 00:23:07.144 "supported_io_types": { 00:23:07.144 "read": true, 00:23:07.144 "write": true, 00:23:07.144 "unmap": true, 00:23:07.144 "write_zeroes": true, 00:23:07.144 "flush": true, 00:23:07.144 "reset": true, 00:23:07.144 "compare": false, 00:23:07.144 "compare_and_write": false, 00:23:07.144 "abort": true, 00:23:07.144 "nvme_admin": false, 00:23:07.144 "nvme_io": false 00:23:07.144 }, 00:23:07.144 "memory_domains": [ 00:23:07.144 { 00:23:07.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.144 "dma_device_type": 2 00:23:07.144 } 00:23:07.144 ], 00:23:07.144 "driver_specific": {} 00:23:07.144 } 00:23:07.144 ] 00:23:07.144 13:08:25 -- common/autotest_common.sh@895 -- # return 0 00:23:07.145 13:08:25 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:07.402 [2024-06-11 13:08:26.015828] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:07.402 [2024-06-11 13:08:26.017773] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:07.402 [2024-06-11 13:08:26.017833] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:07.402 [2024-06-11 13:08:26.017860] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:07.402 [2024-06-11 
13:08:26.017884] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.402 13:08:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.660 13:08:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.660 "name": "Existed_Raid", 00:23:07.660 "uuid": "ac047c00-3fbd-49bc-a224-8876d655edc9", 00:23:07.660 "strip_size_kb": 64, 00:23:07.660 "state": "configuring", 00:23:07.660 "raid_level": "raid5f", 00:23:07.660 "superblock": true, 00:23:07.660 "num_base_bdevs": 3, 00:23:07.660 "num_base_bdevs_discovered": 1, 00:23:07.660 "num_base_bdevs_operational": 3, 00:23:07.660 "base_bdevs_list": [ 00:23:07.660 { 00:23:07.660 "name": "BaseBdev1", 00:23:07.660 "uuid": "e72a3f20-1024-4dd0-b2f0-ddd3a0dd3622", 00:23:07.660 "is_configured": true, 00:23:07.660 "data_offset": 2048, 00:23:07.660 "data_size": 63488 00:23:07.660 }, 00:23:07.660 { 00:23:07.660 "name": "BaseBdev2", 00:23:07.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.660 "is_configured": false, 00:23:07.660 "data_offset": 0, 00:23:07.660 "data_size": 0 00:23:07.660 }, 00:23:07.660 { 00:23:07.660 "name": "BaseBdev3", 00:23:07.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.660 "is_configured": false, 00:23:07.660 "data_offset": 0, 00:23:07.660 "data_size": 0 00:23:07.660 } 00:23:07.660 ] 00:23:07.660 }' 00:23:07.660 13:08:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.660 13:08:26 -- common/autotest_common.sh@10 -- # set +x 00:23:08.226 13:08:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:08.485 [2024-06-11 13:08:27.076966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:08.485 BaseBdev2 00:23:08.485 13:08:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:08.485 13:08:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:08.485 13:08:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:08.485 13:08:27 -- common/autotest_common.sh@889 -- # local i 00:23:08.485 13:08:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:08.485 13:08:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:08.485 13:08:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:08.744 13:08:27 -- 
common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:08.744 [ 00:23:08.744 { 00:23:08.744 "name": "BaseBdev2", 00:23:08.744 "aliases": [ 00:23:08.744 "8cf66df1-3fe4-4ad5-98b7-b4683880dc2c" 00:23:08.744 ], 00:23:08.744 "product_name": "Malloc disk", 00:23:08.744 "block_size": 512, 00:23:08.744 "num_blocks": 65536, 00:23:08.744 "uuid": "8cf66df1-3fe4-4ad5-98b7-b4683880dc2c", 00:23:08.744 "assigned_rate_limits": { 00:23:08.744 "rw_ios_per_sec": 0, 00:23:08.744 "rw_mbytes_per_sec": 0, 00:23:08.744 "r_mbytes_per_sec": 0, 00:23:08.744 "w_mbytes_per_sec": 0 00:23:08.744 }, 00:23:08.744 "claimed": true, 00:23:08.744 "claim_type": "exclusive_write", 00:23:08.744 "zoned": false, 00:23:08.744 "supported_io_types": { 00:23:08.744 "read": true, 00:23:08.744 "write": true, 00:23:08.744 "unmap": true, 00:23:08.744 "write_zeroes": true, 00:23:08.744 "flush": true, 00:23:08.744 "reset": true, 00:23:08.744 "compare": false, 00:23:08.744 "compare_and_write": false, 00:23:08.744 "abort": true, 00:23:08.744 "nvme_admin": false, 00:23:08.744 "nvme_io": false 00:23:08.744 }, 00:23:08.744 "memory_domains": [ 00:23:08.744 { 00:23:08.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.744 "dma_device_type": 2 00:23:08.744 } 00:23:08.744 ], 00:23:08.744 "driver_specific": {} 00:23:08.744 } 00:23:08.744 ] 00:23:08.744 13:08:27 -- common/autotest_common.sh@895 -- # return 0 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.744 13:08:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.002 13:08:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:09.002 "name": "Existed_Raid", 00:23:09.002 "uuid": "ac047c00-3fbd-49bc-a224-8876d655edc9", 00:23:09.002 "strip_size_kb": 64, 00:23:09.002 "state": "configuring", 00:23:09.002 "raid_level": "raid5f", 00:23:09.002 "superblock": true, 00:23:09.002 "num_base_bdevs": 3, 00:23:09.002 "num_base_bdevs_discovered": 2, 00:23:09.002 "num_base_bdevs_operational": 3, 00:23:09.002 "base_bdevs_list": [ 00:23:09.002 { 00:23:09.002 "name": "BaseBdev1", 00:23:09.002 "uuid": "e72a3f20-1024-4dd0-b2f0-ddd3a0dd3622", 00:23:09.002 "is_configured": true, 00:23:09.002 "data_offset": 2048, 00:23:09.002 "data_size": 63488 00:23:09.002 }, 00:23:09.002 { 00:23:09.002 "name": "BaseBdev2", 00:23:09.002 "uuid": "8cf66df1-3fe4-4ad5-98b7-b4683880dc2c", 00:23:09.002 "is_configured": true, 00:23:09.002 "data_offset": 2048, 00:23:09.002 
"data_size": 63488 00:23:09.002 }, 00:23:09.002 { 00:23:09.002 "name": "BaseBdev3", 00:23:09.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.002 "is_configured": false, 00:23:09.002 "data_offset": 0, 00:23:09.002 "data_size": 0 00:23:09.002 } 00:23:09.002 ] 00:23:09.002 }' 00:23:09.002 13:08:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:09.002 13:08:27 -- common/autotest_common.sh@10 -- # set +x 00:23:09.936 13:08:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:09.936 [2024-06-11 13:08:28.733557] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:09.936 [2024-06-11 13:08:28.733847] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:23:09.936 [2024-06-11 13:08:28.733863] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:09.936 BaseBdev3 00:23:09.936 [2024-06-11 13:08:28.734043] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:23:09.936 [2024-06-11 13:08:28.738794] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:23:09.936 [2024-06-11 13:08:28.738820] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:23:09.936 [2024-06-11 13:08:28.739006] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.936 13:08:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:09.936 13:08:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:09.937 13:08:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:09.937 13:08:28 -- common/autotest_common.sh@889 -- # local i 00:23:09.937 13:08:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:09.937 13:08:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:09.937 13:08:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:10.194 13:08:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:10.452 [ 00:23:10.452 { 00:23:10.452 "name": "BaseBdev3", 00:23:10.452 "aliases": [ 00:23:10.452 "94ab0969-5059-45d0-9ced-76f777aa18db" 00:23:10.452 ], 00:23:10.452 "product_name": "Malloc disk", 00:23:10.452 "block_size": 512, 00:23:10.452 "num_blocks": 65536, 00:23:10.452 "uuid": "94ab0969-5059-45d0-9ced-76f777aa18db", 00:23:10.452 "assigned_rate_limits": { 00:23:10.452 "rw_ios_per_sec": 0, 00:23:10.452 "rw_mbytes_per_sec": 0, 00:23:10.452 "r_mbytes_per_sec": 0, 00:23:10.452 "w_mbytes_per_sec": 0 00:23:10.452 }, 00:23:10.452 "claimed": true, 00:23:10.452 "claim_type": "exclusive_write", 00:23:10.452 "zoned": false, 00:23:10.452 "supported_io_types": { 00:23:10.452 "read": true, 00:23:10.452 "write": true, 00:23:10.452 "unmap": true, 00:23:10.452 "write_zeroes": true, 00:23:10.452 "flush": true, 00:23:10.452 "reset": true, 00:23:10.452 "compare": false, 00:23:10.452 "compare_and_write": false, 00:23:10.452 "abort": true, 00:23:10.452 "nvme_admin": false, 00:23:10.452 "nvme_io": false 00:23:10.452 }, 00:23:10.452 "memory_domains": [ 00:23:10.452 { 00:23:10.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.452 "dma_device_type": 2 00:23:10.452 } 00:23:10.452 ], 00:23:10.452 "driver_specific": {} 00:23:10.452 } 00:23:10.452 ] 00:23:10.452 
13:08:29 -- common/autotest_common.sh@895 -- # return 0 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.452 13:08:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.710 13:08:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:10.710 "name": "Existed_Raid", 00:23:10.710 "uuid": "ac047c00-3fbd-49bc-a224-8876d655edc9", 00:23:10.710 "strip_size_kb": 64, 00:23:10.710 "state": "online", 00:23:10.710 "raid_level": "raid5f", 00:23:10.710 "superblock": true, 00:23:10.710 "num_base_bdevs": 3, 00:23:10.710 "num_base_bdevs_discovered": 3, 00:23:10.710 "num_base_bdevs_operational": 3, 00:23:10.710 "base_bdevs_list": [ 00:23:10.710 { 00:23:10.710 "name": "BaseBdev1", 00:23:10.710 "uuid": "e72a3f20-1024-4dd0-b2f0-ddd3a0dd3622", 00:23:10.710 "is_configured": true, 00:23:10.710 "data_offset": 2048, 00:23:10.710 "data_size": 63488 00:23:10.710 }, 00:23:10.710 { 00:23:10.710 "name": "BaseBdev2", 00:23:10.710 "uuid": "8cf66df1-3fe4-4ad5-98b7-b4683880dc2c", 00:23:10.710 "is_configured": true, 00:23:10.710 "data_offset": 2048, 00:23:10.710 "data_size": 63488 00:23:10.710 }, 00:23:10.710 { 00:23:10.710 "name": "BaseBdev3", 00:23:10.710 "uuid": "94ab0969-5059-45d0-9ced-76f777aa18db", 00:23:10.710 "is_configured": true, 00:23:10.710 "data_offset": 2048, 00:23:10.710 "data_size": 63488 00:23:10.710 } 00:23:10.710 ] 00:23:10.710 }' 00:23:10.710 13:08:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:10.710 13:08:29 -- common/autotest_common.sh@10 -- # set +x 00:23:11.276 13:08:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:11.534 [2024-06-11 13:08:30.164193] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.534 13:08:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.793 13:08:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:11.793 "name": "Existed_Raid", 00:23:11.793 "uuid": "ac047c00-3fbd-49bc-a224-8876d655edc9", 00:23:11.793 "strip_size_kb": 64, 00:23:11.793 "state": "online", 00:23:11.793 "raid_level": "raid5f", 00:23:11.793 "superblock": true, 00:23:11.793 "num_base_bdevs": 3, 00:23:11.793 "num_base_bdevs_discovered": 2, 00:23:11.793 "num_base_bdevs_operational": 2, 00:23:11.793 "base_bdevs_list": [ 00:23:11.793 { 00:23:11.793 "name": null, 00:23:11.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.793 "is_configured": false, 00:23:11.793 "data_offset": 2048, 00:23:11.793 "data_size": 63488 00:23:11.793 }, 00:23:11.793 { 00:23:11.793 "name": "BaseBdev2", 00:23:11.793 "uuid": "8cf66df1-3fe4-4ad5-98b7-b4683880dc2c", 00:23:11.793 "is_configured": true, 00:23:11.793 "data_offset": 2048, 00:23:11.793 "data_size": 63488 00:23:11.793 }, 00:23:11.793 { 00:23:11.793 "name": "BaseBdev3", 00:23:11.793 "uuid": "94ab0969-5059-45d0-9ced-76f777aa18db", 00:23:11.793 "is_configured": true, 00:23:11.793 "data_offset": 2048, 00:23:11.793 "data_size": 63488 00:23:11.793 } 00:23:11.793 ] 00:23:11.793 }' 00:23:11.793 13:08:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:11.793 13:08:30 -- common/autotest_common.sh@10 -- # set +x 00:23:12.361 13:08:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:12.361 13:08:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:12.361 13:08:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.361 13:08:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:12.636 13:08:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:12.636 13:08:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:12.636 13:08:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:12.908 [2024-06-11 13:08:31.559389] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:12.908 [2024-06-11 13:08:31.559423] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:12.908 [2024-06-11 13:08:31.559491] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:12.908 13:08:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:12.908 13:08:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:12.908 13:08:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.908 13:08:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:13.166 13:08:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:13.166 13:08:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:13.166 13:08:31 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:13.423 [2024-06-11 13:08:32.016554] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:13.423 [2024-06-11 13:08:32.016622] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:23:13.423 13:08:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:13.423 13:08:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:13.423 13:08:32 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.423 13:08:32 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:13.680 13:08:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:13.680 13:08:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:13.680 13:08:32 -- bdev/bdev_raid.sh@287 -- # killprocess 130938 00:23:13.680 13:08:32 -- common/autotest_common.sh@926 -- # '[' -z 130938 ']' 00:23:13.680 13:08:32 -- common/autotest_common.sh@930 -- # kill -0 130938 00:23:13.680 13:08:32 -- common/autotest_common.sh@931 -- # uname 00:23:13.680 13:08:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:13.680 13:08:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130938 00:23:13.680 13:08:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:13.680 13:08:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:13.680 13:08:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130938' 00:23:13.680 killing process with pid 130938 00:23:13.680 13:08:32 -- common/autotest_common.sh@945 -- # kill 130938 00:23:13.680 [2024-06-11 13:08:32.313062] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:13.680 13:08:32 -- common/autotest_common.sh@950 -- # wait 130938 00:23:13.680 [2024-06-11 13:08:32.313193] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:14.616 00:23:14.616 real 0m12.540s 00:23:14.616 user 0m22.328s 00:23:14.616 sys 0m1.476s 00:23:14.616 13:08:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.616 13:08:33 -- common/autotest_common.sh@10 -- # set +x 00:23:14.616 ************************************ 00:23:14.616 END TEST raid5f_state_function_test_sb 00:23:14.616 ************************************ 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:23:14.616 13:08:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:23:14.616 13:08:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:14.616 13:08:33 -- common/autotest_common.sh@10 -- # set +x 00:23:14.616 ************************************ 00:23:14.616 START TEST raid5f_superblock_test 00:23:14.616 ************************************ 00:23:14.616 13:08:33 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:14.616 13:08:33 
-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@357 -- # raid_pid=131346 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:14.616 13:08:33 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131346 /var/tmp/spdk-raid.sock 00:23:14.616 13:08:33 -- common/autotest_common.sh@819 -- # '[' -z 131346 ']' 00:23:14.616 13:08:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:14.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:14.616 13:08:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:14.616 13:08:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:14.616 13:08:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:14.616 13:08:33 -- common/autotest_common.sh@10 -- # set +x 00:23:14.616 [2024-06-11 13:08:33.365837] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:14.616 [2024-06-11 13:08:33.366082] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131346 ] 00:23:14.874 [2024-06-11 13:08:33.541192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.133 [2024-06-11 13:08:33.759043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.133 [2024-06-11 13:08:33.948295] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:15.391 13:08:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:15.391 13:08:34 -- common/autotest_common.sh@852 -- # return 0 00:23:15.391 13:08:34 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:15.391 13:08:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:15.391 13:08:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:15.391 13:08:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:15.391 13:08:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:15.391 13:08:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:15.391 13:08:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:15.391 13:08:34 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:15.391 13:08:34 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:15.958 malloc1 00:23:15.958 13:08:34 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:15.958 
[2024-06-11 13:08:34.698838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:15.958 [2024-06-11 13:08:34.698933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.958 [2024-06-11 13:08:34.698963] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:15.958 [2024-06-11 13:08:34.699019] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.958 [2024-06-11 13:08:34.701198] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.958 [2024-06-11 13:08:34.701241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:15.958 pt1 00:23:15.958 13:08:34 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:15.958 13:08:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:15.958 13:08:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:15.958 13:08:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:15.958 13:08:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:15.958 13:08:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:15.958 13:08:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:15.958 13:08:34 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:15.958 13:08:34 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:16.217 malloc2 00:23:16.217 13:08:34 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:16.475 [2024-06-11 13:08:35.121541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:16.475 [2024-06-11 13:08:35.121610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:16.475 [2024-06-11 13:08:35.121651] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:16.475 [2024-06-11 13:08:35.121701] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:16.475 [2024-06-11 13:08:35.123787] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:16.475 [2024-06-11 13:08:35.123829] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:16.475 pt2 00:23:16.475 13:08:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:16.476 13:08:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:16.476 13:08:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:16.476 13:08:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:16.476 13:08:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:16.476 13:08:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:16.476 13:08:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:16.476 13:08:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:16.476 13:08:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:16.734 malloc3 00:23:16.735 13:08:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:16.735 
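Up to this point the superblock test has been assembling its fixture: each base bdev is a 32 MiB malloc bdev wrapped in a passthru bdev with a fixed UUID (malloc1/pt1 and malloc2/pt2 above, malloc3/pt3 being created here), and a raid5f bdev carrying an on-disk superblock is then built on top of them. A stand-alone sketch of that fixture with the same RPCs, assuming a target on /var/tmp/spdk-raid.sock:

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  for i in 1 2 3; do
      # one 32 MiB / 512-byte-block malloc per member, wrapped in a passthru bdev with a stable UUID
      $RPC bdev_malloc_create 32 512 -b malloc$i
      $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # raid5f across the passthru bdevs, 64 KiB strip size, written with a superblock (-s)
  $RPC bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s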
[2024-06-11 13:08:35.523452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:16.735 [2024-06-11 13:08:35.523539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:16.735 [2024-06-11 13:08:35.523576] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:16.735 [2024-06-11 13:08:35.523619] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:16.735 [2024-06-11 13:08:35.525909] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:16.735 [2024-06-11 13:08:35.525958] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:16.735 pt3 00:23:16.735 13:08:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:16.735 13:08:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:16.735 13:08:35 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:16.993 [2024-06-11 13:08:35.715490] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:16.993 [2024-06-11 13:08:35.717362] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:16.993 [2024-06-11 13:08:35.717440] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:16.993 [2024-06-11 13:08:35.717627] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:16.993 [2024-06-11 13:08:35.717639] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:16.993 [2024-06-11 13:08:35.717754] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:16.993 [2024-06-11 13:08:35.721922] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:16.994 [2024-06-11 13:08:35.721945] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:16.994 [2024-06-11 13:08:35.722095] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.994 13:08:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.252 13:08:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:17.252 "name": "raid_bdev1", 00:23:17.252 "uuid": "c9bd3c28-3e4b-42b7-8856-862307afdeeb", 00:23:17.252 "strip_size_kb": 64, 00:23:17.252 "state": "online", 00:23:17.252 "raid_level": "raid5f", 00:23:17.252 "superblock": true, 00:23:17.252 
"num_base_bdevs": 3, 00:23:17.252 "num_base_bdevs_discovered": 3, 00:23:17.252 "num_base_bdevs_operational": 3, 00:23:17.252 "base_bdevs_list": [ 00:23:17.252 { 00:23:17.252 "name": "pt1", 00:23:17.252 "uuid": "a60c69a8-a163-5672-ad7e-dc6f32fd08c7", 00:23:17.252 "is_configured": true, 00:23:17.252 "data_offset": 2048, 00:23:17.252 "data_size": 63488 00:23:17.252 }, 00:23:17.252 { 00:23:17.252 "name": "pt2", 00:23:17.252 "uuid": "2b624d50-0d75-5168-8a03-aa3e3448179c", 00:23:17.252 "is_configured": true, 00:23:17.252 "data_offset": 2048, 00:23:17.252 "data_size": 63488 00:23:17.252 }, 00:23:17.252 { 00:23:17.252 "name": "pt3", 00:23:17.252 "uuid": "51c4015f-874c-5e02-ae32-78456101aa9b", 00:23:17.252 "is_configured": true, 00:23:17.252 "data_offset": 2048, 00:23:17.252 "data_size": 63488 00:23:17.252 } 00:23:17.252 ] 00:23:17.252 }' 00:23:17.252 13:08:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:17.252 13:08:35 -- common/autotest_common.sh@10 -- # set +x 00:23:17.820 13:08:36 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:17.820 13:08:36 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:18.079 [2024-06-11 13:08:36.815317] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:18.079 13:08:36 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c9bd3c28-3e4b-42b7-8856-862307afdeeb 00:23:18.079 13:08:36 -- bdev/bdev_raid.sh@380 -- # '[' -z c9bd3c28-3e4b-42b7-8856-862307afdeeb ']' 00:23:18.079 13:08:36 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:18.337 [2024-06-11 13:08:37.023200] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:18.337 [2024-06-11 13:08:37.023221] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:18.337 [2024-06-11 13:08:37.023294] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:18.337 [2024-06-11 13:08:37.023404] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:18.337 [2024-06-11 13:08:37.023416] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:18.337 13:08:37 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.337 13:08:37 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:18.596 13:08:37 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:18.596 13:08:37 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:18.596 13:08:37 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:18.596 13:08:37 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:18.855 13:08:37 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:18.855 13:08:37 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:18.855 13:08:37 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:18.855 13:08:37 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:19.114 13:08:37 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:19.114 13:08:37 -- 
bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:19.372 13:08:38 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:19.372 13:08:38 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:19.372 13:08:38 -- common/autotest_common.sh@640 -- # local es=0 00:23:19.372 13:08:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:19.372 13:08:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:19.372 13:08:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:19.372 13:08:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:19.372 13:08:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:19.372 13:08:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:19.372 13:08:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:19.372 13:08:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:19.372 13:08:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:19.372 13:08:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:19.372 [2024-06-11 13:08:38.175383] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:19.372 [2024-06-11 13:08:38.177330] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:19.372 [2024-06-11 13:08:38.177379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:19.372 [2024-06-11 13:08:38.177451] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:19.372 [2024-06-11 13:08:38.177519] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:19.372 [2024-06-11 13:08:38.177570] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:19.372 [2024-06-11 13:08:38.177625] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:19.372 [2024-06-11 13:08:38.177645] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:23:19.372 request: 00:23:19.372 { 00:23:19.372 "name": "raid_bdev1", 00:23:19.372 "raid_level": "raid5f", 00:23:19.372 "base_bdevs": [ 00:23:19.372 "malloc1", 00:23:19.372 "malloc2", 00:23:19.372 "malloc3" 00:23:19.372 ], 00:23:19.372 "superblock": false, 00:23:19.372 "strip_size_kb": 64, 00:23:19.372 "method": "bdev_raid_create", 00:23:19.372 "req_id": 1 00:23:19.372 } 00:23:19.372 Got JSON-RPC error response 00:23:19.372 response: 00:23:19.372 { 00:23:19.372 "code": -17, 00:23:19.372 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:19.372 } 00:23:19.372 13:08:38 -- common/autotest_common.sh@643 -- # es=1 00:23:19.372 13:08:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:19.372 13:08:38 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:19.372 13:08:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:19.372 13:08:38 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.372 13:08:38 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:19.630 13:08:38 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:19.630 13:08:38 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:19.630 13:08:38 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:19.887 [2024-06-11 13:08:38.683393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:19.887 [2024-06-11 13:08:38.683447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.887 [2024-06-11 13:08:38.683484] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:19.887 [2024-06-11 13:08:38.683504] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.887 [2024-06-11 13:08:38.685524] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.887 [2024-06-11 13:08:38.685566] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:19.887 [2024-06-11 13:08:38.685663] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:19.887 [2024-06-11 13:08:38.685711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:19.887 pt1 00:23:19.887 13:08:38 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:19.887 13:08:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:19.887 13:08:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:19.887 13:08:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:19.887 13:08:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:19.887 13:08:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:19.887 13:08:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.887 13:08:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.887 13:08:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.887 13:08:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.888 13:08:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.888 13:08:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.146 13:08:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:20.146 "name": "raid_bdev1", 00:23:20.146 "uuid": "c9bd3c28-3e4b-42b7-8856-862307afdeeb", 00:23:20.146 "strip_size_kb": 64, 00:23:20.146 "state": "configuring", 00:23:20.146 "raid_level": "raid5f", 00:23:20.146 "superblock": true, 00:23:20.146 "num_base_bdevs": 3, 00:23:20.146 "num_base_bdevs_discovered": 1, 00:23:20.146 "num_base_bdevs_operational": 3, 00:23:20.146 "base_bdevs_list": [ 00:23:20.146 { 00:23:20.146 "name": "pt1", 00:23:20.146 "uuid": "a60c69a8-a163-5672-ad7e-dc6f32fd08c7", 00:23:20.146 "is_configured": true, 00:23:20.146 "data_offset": 2048, 00:23:20.146 "data_size": 63488 00:23:20.146 }, 00:23:20.146 { 00:23:20.146 "name": null, 00:23:20.146 "uuid": "2b624d50-0d75-5168-8a03-aa3e3448179c", 00:23:20.146 "is_configured": false, 00:23:20.146 
"data_offset": 2048, 00:23:20.146 "data_size": 63488 00:23:20.146 }, 00:23:20.146 { 00:23:20.146 "name": null, 00:23:20.146 "uuid": "51c4015f-874c-5e02-ae32-78456101aa9b", 00:23:20.146 "is_configured": false, 00:23:20.146 "data_offset": 2048, 00:23:20.146 "data_size": 63488 00:23:20.146 } 00:23:20.146 ] 00:23:20.146 }' 00:23:20.146 13:08:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:20.146 13:08:38 -- common/autotest_common.sh@10 -- # set +x 00:23:20.712 13:08:39 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:23:20.712 13:08:39 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:20.970 [2024-06-11 13:08:39.771629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:20.970 [2024-06-11 13:08:39.771732] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.970 [2024-06-11 13:08:39.771788] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:20.970 [2024-06-11 13:08:39.771812] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.970 [2024-06-11 13:08:39.772294] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.970 [2024-06-11 13:08:39.772335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:20.970 [2024-06-11 13:08:39.772453] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:20.970 [2024-06-11 13:08:39.772490] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:20.970 pt2 00:23:20.970 13:08:39 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:21.228 [2024-06-11 13:08:39.963672] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:21.228 13:08:39 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:21.228 13:08:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:21.228 13:08:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:21.228 13:08:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:21.228 13:08:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:21.229 13:08:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:21.229 13:08:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:21.229 13:08:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:21.229 13:08:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:21.229 13:08:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:21.229 13:08:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.229 13:08:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.487 13:08:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:21.487 "name": "raid_bdev1", 00:23:21.487 "uuid": "c9bd3c28-3e4b-42b7-8856-862307afdeeb", 00:23:21.487 "strip_size_kb": 64, 00:23:21.487 "state": "configuring", 00:23:21.487 "raid_level": "raid5f", 00:23:21.487 "superblock": true, 00:23:21.487 "num_base_bdevs": 3, 00:23:21.487 "num_base_bdevs_discovered": 1, 00:23:21.487 "num_base_bdevs_operational": 3, 00:23:21.487 "base_bdevs_list": [ 00:23:21.487 { 00:23:21.487 "name": "pt1", 00:23:21.487 "uuid": 
"a60c69a8-a163-5672-ad7e-dc6f32fd08c7", 00:23:21.487 "is_configured": true, 00:23:21.487 "data_offset": 2048, 00:23:21.487 "data_size": 63488 00:23:21.487 }, 00:23:21.487 { 00:23:21.487 "name": null, 00:23:21.487 "uuid": "2b624d50-0d75-5168-8a03-aa3e3448179c", 00:23:21.487 "is_configured": false, 00:23:21.487 "data_offset": 2048, 00:23:21.487 "data_size": 63488 00:23:21.487 }, 00:23:21.487 { 00:23:21.487 "name": null, 00:23:21.487 "uuid": "51c4015f-874c-5e02-ae32-78456101aa9b", 00:23:21.487 "is_configured": false, 00:23:21.487 "data_offset": 2048, 00:23:21.487 "data_size": 63488 00:23:21.487 } 00:23:21.487 ] 00:23:21.487 }' 00:23:21.487 13:08:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:21.487 13:08:40 -- common/autotest_common.sh@10 -- # set +x 00:23:22.054 13:08:40 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:22.054 13:08:40 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:22.054 13:08:40 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:22.314 [2024-06-11 13:08:41.015975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:22.314 [2024-06-11 13:08:41.016150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.314 [2024-06-11 13:08:41.016224] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:22.314 [2024-06-11 13:08:41.016264] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.314 [2024-06-11 13:08:41.016912] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.314 [2024-06-11 13:08:41.016970] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:22.314 [2024-06-11 13:08:41.017100] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:22.314 [2024-06-11 13:08:41.017131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:22.314 pt2 00:23:22.314 13:08:41 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:22.314 13:08:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:22.314 13:08:41 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:22.573 [2024-06-11 13:08:41.199912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:22.573 [2024-06-11 13:08:41.199975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.573 [2024-06-11 13:08:41.200011] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:22.573 [2024-06-11 13:08:41.200042] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.573 [2024-06-11 13:08:41.200444] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.573 [2024-06-11 13:08:41.200495] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:22.573 [2024-06-11 13:08:41.200598] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:22.573 [2024-06-11 13:08:41.200625] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:22.573 [2024-06-11 13:08:41.200751] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 
00:23:22.573 [2024-06-11 13:08:41.200774] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:22.573 [2024-06-11 13:08:41.200885] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:22.573 [2024-06-11 13:08:41.205148] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:23:22.573 [2024-06-11 13:08:41.205173] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:23:22.573 [2024-06-11 13:08:41.205350] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.573 pt3 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.573 13:08:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.831 13:08:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:22.831 "name": "raid_bdev1", 00:23:22.831 "uuid": "c9bd3c28-3e4b-42b7-8856-862307afdeeb", 00:23:22.831 "strip_size_kb": 64, 00:23:22.831 "state": "online", 00:23:22.831 "raid_level": "raid5f", 00:23:22.831 "superblock": true, 00:23:22.831 "num_base_bdevs": 3, 00:23:22.831 "num_base_bdevs_discovered": 3, 00:23:22.831 "num_base_bdevs_operational": 3, 00:23:22.831 "base_bdevs_list": [ 00:23:22.831 { 00:23:22.831 "name": "pt1", 00:23:22.831 "uuid": "a60c69a8-a163-5672-ad7e-dc6f32fd08c7", 00:23:22.832 "is_configured": true, 00:23:22.832 "data_offset": 2048, 00:23:22.832 "data_size": 63488 00:23:22.832 }, 00:23:22.832 { 00:23:22.832 "name": "pt2", 00:23:22.832 "uuid": "2b624d50-0d75-5168-8a03-aa3e3448179c", 00:23:22.832 "is_configured": true, 00:23:22.832 "data_offset": 2048, 00:23:22.832 "data_size": 63488 00:23:22.832 }, 00:23:22.832 { 00:23:22.832 "name": "pt3", 00:23:22.832 "uuid": "51c4015f-874c-5e02-ae32-78456101aa9b", 00:23:22.832 "is_configured": true, 00:23:22.832 "data_offset": 2048, 00:23:22.832 "data_size": 63488 00:23:22.832 } 00:23:22.832 ] 00:23:22.832 }' 00:23:22.832 13:08:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.832 13:08:41 -- common/autotest_common.sh@10 -- # set +x 00:23:23.398 13:08:42 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:23.398 13:08:42 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:23.656 [2024-06-11 13:08:42.322819] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:23.656 13:08:42 -- bdev/bdev_raid.sh@430 -- # '[' 
c9bd3c28-3e4b-42b7-8856-862307afdeeb '!=' c9bd3c28-3e4b-42b7-8856-862307afdeeb ']' 00:23:23.656 13:08:42 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:23.656 13:08:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:23.656 13:08:42 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:23.656 13:08:42 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:23.914 [2024-06-11 13:08:42.498671] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.914 "name": "raid_bdev1", 00:23:23.914 "uuid": "c9bd3c28-3e4b-42b7-8856-862307afdeeb", 00:23:23.914 "strip_size_kb": 64, 00:23:23.914 "state": "online", 00:23:23.914 "raid_level": "raid5f", 00:23:23.914 "superblock": true, 00:23:23.914 "num_base_bdevs": 3, 00:23:23.914 "num_base_bdevs_discovered": 2, 00:23:23.914 "num_base_bdevs_operational": 2, 00:23:23.914 "base_bdevs_list": [ 00:23:23.914 { 00:23:23.914 "name": null, 00:23:23.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.914 "is_configured": false, 00:23:23.914 "data_offset": 2048, 00:23:23.914 "data_size": 63488 00:23:23.914 }, 00:23:23.914 { 00:23:23.914 "name": "pt2", 00:23:23.914 "uuid": "2b624d50-0d75-5168-8a03-aa3e3448179c", 00:23:23.914 "is_configured": true, 00:23:23.914 "data_offset": 2048, 00:23:23.914 "data_size": 63488 00:23:23.914 }, 00:23:23.914 { 00:23:23.914 "name": "pt3", 00:23:23.914 "uuid": "51c4015f-874c-5e02-ae32-78456101aa9b", 00:23:23.914 "is_configured": true, 00:23:23.914 "data_offset": 2048, 00:23:23.914 "data_size": 63488 00:23:23.914 } 00:23:23.914 ] 00:23:23.914 }' 00:23:23.914 13:08:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.914 13:08:42 -- common/autotest_common.sh@10 -- # set +x 00:23:24.849 13:08:43 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:24.849 [2024-06-11 13:08:43.558152] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:24.849 [2024-06-11 13:08:43.558193] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:24.849 [2024-06-11 13:08:43.558266] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.849 [2024-06-11 13:08:43.558332] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:24.849 
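Every online/configuring assertion in this trace boils down to fetching the raid bdev's descriptor with bdev_raid_get_bdevs and picking individual fields out of it with jq, as the raid_bdev_info assignments above show. A hand-rolled approximation of that check (not the harness's actual verify_raid_bdev_state body), assuming the same socket:

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # the fields the test compares against its expectations
  jq -r '.state, .raid_level, .strip_size_kb, .num_base_bdevs_discovered, .num_base_bdevs_operational' <<< "$info"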
[2024-06-11 13:08:43.558344] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:23:24.849 13:08:43 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.849 13:08:43 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:25.118 13:08:43 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:25.118 13:08:43 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:25.118 13:08:43 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:25.118 13:08:43 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:25.118 13:08:43 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:25.391 13:08:44 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:25.391 13:08:44 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:25.391 13:08:44 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:25.650 [2024-06-11 13:08:44.454896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:25.650 [2024-06-11 13:08:44.455443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.650 [2024-06-11 13:08:44.455664] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:25.650 [2024-06-11 13:08:44.455799] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.650 [2024-06-11 13:08:44.458274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.650 [2024-06-11 13:08:44.458433] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:25.650 [2024-06-11 13:08:44.458666] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:25.650 [2024-06-11 13:08:44.458751] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:25.650 pt2 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.650 13:08:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:23:25.909 13:08:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:25.909 "name": "raid_bdev1", 00:23:25.909 "uuid": "c9bd3c28-3e4b-42b7-8856-862307afdeeb", 00:23:25.909 "strip_size_kb": 64, 00:23:25.909 "state": "configuring", 00:23:25.909 "raid_level": "raid5f", 00:23:25.909 "superblock": true, 00:23:25.909 "num_base_bdevs": 3, 00:23:25.909 "num_base_bdevs_discovered": 1, 00:23:25.909 "num_base_bdevs_operational": 2, 00:23:25.909 "base_bdevs_list": [ 00:23:25.909 { 00:23:25.909 "name": null, 00:23:25.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.909 "is_configured": false, 00:23:25.909 "data_offset": 2048, 00:23:25.909 "data_size": 63488 00:23:25.909 }, 00:23:25.909 { 00:23:25.909 "name": "pt2", 00:23:25.909 "uuid": "2b624d50-0d75-5168-8a03-aa3e3448179c", 00:23:25.909 "is_configured": true, 00:23:25.909 "data_offset": 2048, 00:23:25.909 "data_size": 63488 00:23:25.909 }, 00:23:25.909 { 00:23:25.909 "name": null, 00:23:25.909 "uuid": "51c4015f-874c-5e02-ae32-78456101aa9b", 00:23:25.909 "is_configured": false, 00:23:25.909 "data_offset": 2048, 00:23:25.909 "data_size": 63488 00:23:25.909 } 00:23:25.909 ] 00:23:25.909 }' 00:23:25.909 13:08:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:25.909 13:08:44 -- common/autotest_common.sh@10 -- # set +x 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@462 -- # i=2 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:26.845 [2024-06-11 13:08:45.567157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:26.845 [2024-06-11 13:08:45.567650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.845 [2024-06-11 13:08:45.567800] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:26.845 [2024-06-11 13:08:45.567916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.845 [2024-06-11 13:08:45.568516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.845 [2024-06-11 13:08:45.568660] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:26.845 [2024-06-11 13:08:45.568886] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:26.845 [2024-06-11 13:08:45.568928] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:26.845 [2024-06-11 13:08:45.569052] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:23:26.845 [2024-06-11 13:08:45.569074] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:26.845 [2024-06-11 13:08:45.569162] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:26.845 [2024-06-11 13:08:45.573314] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:23:26.845 [2024-06-11 13:08:45.573339] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:23:26.845 [2024-06-11 13:08:45.573667] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.845 pt3 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@466 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.845 13:08:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.104 13:08:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:27.104 "name": "raid_bdev1", 00:23:27.104 "uuid": "c9bd3c28-3e4b-42b7-8856-862307afdeeb", 00:23:27.104 "strip_size_kb": 64, 00:23:27.104 "state": "online", 00:23:27.104 "raid_level": "raid5f", 00:23:27.104 "superblock": true, 00:23:27.104 "num_base_bdevs": 3, 00:23:27.104 "num_base_bdevs_discovered": 2, 00:23:27.104 "num_base_bdevs_operational": 2, 00:23:27.104 "base_bdevs_list": [ 00:23:27.104 { 00:23:27.104 "name": null, 00:23:27.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.104 "is_configured": false, 00:23:27.104 "data_offset": 2048, 00:23:27.104 "data_size": 63488 00:23:27.104 }, 00:23:27.104 { 00:23:27.104 "name": "pt2", 00:23:27.104 "uuid": "2b624d50-0d75-5168-8a03-aa3e3448179c", 00:23:27.104 "is_configured": true, 00:23:27.104 "data_offset": 2048, 00:23:27.104 "data_size": 63488 00:23:27.104 }, 00:23:27.104 { 00:23:27.104 "name": "pt3", 00:23:27.104 "uuid": "51c4015f-874c-5e02-ae32-78456101aa9b", 00:23:27.104 "is_configured": true, 00:23:27.104 "data_offset": 2048, 00:23:27.104 "data_size": 63488 00:23:27.104 } 00:23:27.104 ] 00:23:27.104 }' 00:23:27.104 13:08:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:27.104 13:08:45 -- common/autotest_common.sh@10 -- # set +x 00:23:27.671 13:08:46 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:23:27.671 13:08:46 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:27.929 [2024-06-11 13:08:46.718827] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:27.929 [2024-06-11 13:08:46.718865] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:27.929 [2024-06-11 13:08:46.718960] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:27.930 [2024-06-11 13:08:46.719025] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:27.930 [2024-06-11 13:08:46.719037] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:23:27.930 13:08:46 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.930 13:08:46 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:28.188 13:08:46 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:28.188 13:08:46 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:28.188 13:08:46 -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:28.446 [2024-06-11 13:08:47.114893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:28.446 [2024-06-11 13:08:47.114983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.446 [2024-06-11 13:08:47.115023] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:28.446 [2024-06-11 13:08:47.115044] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.446 [2024-06-11 13:08:47.117207] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.446 [2024-06-11 13:08:47.117250] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:28.446 [2024-06-11 13:08:47.117377] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:28.446 [2024-06-11 13:08:47.117448] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:28.446 pt1 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.446 13:08:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.704 13:08:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:28.704 "name": "raid_bdev1", 00:23:28.704 "uuid": "c9bd3c28-3e4b-42b7-8856-862307afdeeb", 00:23:28.704 "strip_size_kb": 64, 00:23:28.704 "state": "configuring", 00:23:28.704 "raid_level": "raid5f", 00:23:28.704 "superblock": true, 00:23:28.704 "num_base_bdevs": 3, 00:23:28.704 "num_base_bdevs_discovered": 1, 00:23:28.704 "num_base_bdevs_operational": 3, 00:23:28.704 "base_bdevs_list": [ 00:23:28.704 { 00:23:28.704 "name": "pt1", 00:23:28.704 "uuid": "a60c69a8-a163-5672-ad7e-dc6f32fd08c7", 00:23:28.704 "is_configured": true, 00:23:28.704 "data_offset": 2048, 00:23:28.704 "data_size": 63488 00:23:28.704 }, 00:23:28.704 { 00:23:28.704 "name": null, 00:23:28.704 "uuid": "2b624d50-0d75-5168-8a03-aa3e3448179c", 00:23:28.704 "is_configured": false, 00:23:28.704 "data_offset": 2048, 00:23:28.704 "data_size": 63488 00:23:28.704 }, 00:23:28.704 { 00:23:28.704 "name": null, 00:23:28.704 "uuid": "51c4015f-874c-5e02-ae32-78456101aa9b", 00:23:28.704 "is_configured": false, 00:23:28.704 "data_offset": 2048, 00:23:28.704 "data_size": 63488 00:23:28.704 } 00:23:28.704 ] 00:23:28.704 }' 00:23:28.704 13:08:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:28.704 13:08:47 -- common/autotest_common.sh@10 -- # set +x 00:23:29.270 13:08:47 -- 
bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:29.270 13:08:47 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:29.270 13:08:47 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:29.528 13:08:48 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:29.528 13:08:48 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:29.528 13:08:48 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@489 -- # i=2 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:29.786 [2024-06-11 13:08:48.559196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:29.786 [2024-06-11 13:08:48.559258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.786 [2024-06-11 13:08:48.559290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:29.786 [2024-06-11 13:08:48.559320] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.786 [2024-06-11 13:08:48.559736] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.786 [2024-06-11 13:08:48.559767] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:29.786 [2024-06-11 13:08:48.559861] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:29.786 [2024-06-11 13:08:48.559873] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:29.786 [2024-06-11 13:08:48.559879] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:29.786 [2024-06-11 13:08:48.559901] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:23:29.786 [2024-06-11 13:08:48.559970] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:29.786 pt3 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.786 13:08:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.045 13:08:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:30.045 "name": "raid_bdev1", 
00:23:30.045 "uuid": "c9bd3c28-3e4b-42b7-8856-862307afdeeb", 00:23:30.045 "strip_size_kb": 64, 00:23:30.045 "state": "configuring", 00:23:30.045 "raid_level": "raid5f", 00:23:30.045 "superblock": true, 00:23:30.045 "num_base_bdevs": 3, 00:23:30.045 "num_base_bdevs_discovered": 1, 00:23:30.045 "num_base_bdevs_operational": 2, 00:23:30.045 "base_bdevs_list": [ 00:23:30.045 { 00:23:30.045 "name": null, 00:23:30.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.045 "is_configured": false, 00:23:30.045 "data_offset": 2048, 00:23:30.045 "data_size": 63488 00:23:30.045 }, 00:23:30.045 { 00:23:30.045 "name": null, 00:23:30.045 "uuid": "2b624d50-0d75-5168-8a03-aa3e3448179c", 00:23:30.045 "is_configured": false, 00:23:30.045 "data_offset": 2048, 00:23:30.045 "data_size": 63488 00:23:30.045 }, 00:23:30.045 { 00:23:30.045 "name": "pt3", 00:23:30.045 "uuid": "51c4015f-874c-5e02-ae32-78456101aa9b", 00:23:30.045 "is_configured": true, 00:23:30.045 "data_offset": 2048, 00:23:30.045 "data_size": 63488 00:23:30.045 } 00:23:30.045 ] 00:23:30.045 }' 00:23:30.045 13:08:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:30.045 13:08:48 -- common/autotest_common.sh@10 -- # set +x 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:30.981 [2024-06-11 13:08:49.715559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:30.981 [2024-06-11 13:08:49.715667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.981 [2024-06-11 13:08:49.715713] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:30.981 [2024-06-11 13:08:49.715749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.981 [2024-06-11 13:08:49.716420] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.981 [2024-06-11 13:08:49.716475] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:30.981 [2024-06-11 13:08:49.716605] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:30.981 [2024-06-11 13:08:49.716669] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:30.981 [2024-06-11 13:08:49.716832] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:23:30.981 [2024-06-11 13:08:49.716850] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:30.981 [2024-06-11 13:08:49.716978] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:30.981 [2024-06-11 13:08:49.723159] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:23:30.981 [2024-06-11 13:08:49.723193] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:23:30.981 pt2 00:23:30.981 [2024-06-11 13:08:49.723490] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 
00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.981 13:08:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.239 13:08:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:31.239 "name": "raid_bdev1", 00:23:31.239 "uuid": "c9bd3c28-3e4b-42b7-8856-862307afdeeb", 00:23:31.239 "strip_size_kb": 64, 00:23:31.239 "state": "online", 00:23:31.239 "raid_level": "raid5f", 00:23:31.239 "superblock": true, 00:23:31.239 "num_base_bdevs": 3, 00:23:31.239 "num_base_bdevs_discovered": 2, 00:23:31.239 "num_base_bdevs_operational": 2, 00:23:31.239 "base_bdevs_list": [ 00:23:31.239 { 00:23:31.239 "name": null, 00:23:31.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.239 "is_configured": false, 00:23:31.239 "data_offset": 2048, 00:23:31.239 "data_size": 63488 00:23:31.239 }, 00:23:31.239 { 00:23:31.239 "name": "pt2", 00:23:31.239 "uuid": "2b624d50-0d75-5168-8a03-aa3e3448179c", 00:23:31.239 "is_configured": true, 00:23:31.239 "data_offset": 2048, 00:23:31.239 "data_size": 63488 00:23:31.239 }, 00:23:31.239 { 00:23:31.239 "name": "pt3", 00:23:31.239 "uuid": "51c4015f-874c-5e02-ae32-78456101aa9b", 00:23:31.239 "is_configured": true, 00:23:31.239 "data_offset": 2048, 00:23:31.239 "data_size": 63488 00:23:31.239 } 00:23:31.239 ] 00:23:31.239 }' 00:23:31.239 13:08:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:31.239 13:08:50 -- common/autotest_common.sh@10 -- # set +x 00:23:32.169 13:08:50 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:32.169 13:08:50 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:32.169 [2024-06-11 13:08:50.854518] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:32.169 13:08:50 -- bdev/bdev_raid.sh@506 -- # '[' c9bd3c28-3e4b-42b7-8856-862307afdeeb '!=' c9bd3c28-3e4b-42b7-8856-862307afdeeb ']' 00:23:32.169 13:08:50 -- bdev/bdev_raid.sh@511 -- # killprocess 131346 00:23:32.169 13:08:50 -- common/autotest_common.sh@926 -- # '[' -z 131346 ']' 00:23:32.169 13:08:50 -- common/autotest_common.sh@930 -- # kill -0 131346 00:23:32.169 13:08:50 -- common/autotest_common.sh@931 -- # uname 00:23:32.169 13:08:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:32.169 13:08:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131346 00:23:32.169 killing process with pid 131346 00:23:32.170 13:08:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:32.170 13:08:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:32.170 13:08:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131346' 00:23:32.170 13:08:50 -- common/autotest_common.sh@945 -- # kill 
131346 00:23:32.170 13:08:50 -- common/autotest_common.sh@950 -- # wait 131346 00:23:32.170 [2024-06-11 13:08:50.886777] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:32.170 [2024-06-11 13:08:50.886842] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.170 [2024-06-11 13:08:50.886931] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.170 [2024-06-11 13:08:50.886943] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:23:32.428 [2024-06-11 13:08:51.083947] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:33.362 ************************************ 00:23:33.362 END TEST raid5f_superblock_test 00:23:33.362 ************************************ 00:23:33.362 13:08:52 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:33.362 00:23:33.362 real 0m18.716s 00:23:33.362 user 0m34.641s 00:23:33.362 sys 0m2.024s 00:23:33.363 13:08:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.363 13:08:52 -- common/autotest_common.sh@10 -- # set +x 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:23:33.363 13:08:52 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:33.363 13:08:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:33.363 13:08:52 -- common/autotest_common.sh@10 -- # set +x 00:23:33.363 ************************************ 00:23:33.363 START TEST raid5f_rebuild_test 00:23:33.363 ************************************ 00:23:33.363 13:08:52 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 false false 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:33.363 13:08:52 -- 
bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@544 -- # raid_pid=131977 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131977 /var/tmp/spdk-raid.sock 00:23:33.363 13:08:52 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:33.363 13:08:52 -- common/autotest_common.sh@819 -- # '[' -z 131977 ']' 00:23:33.363 13:08:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:33.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:33.363 13:08:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:33.363 13:08:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:33.363 13:08:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:33.363 13:08:52 -- common/autotest_common.sh@10 -- # set +x 00:23:33.363 [2024-06-11 13:08:52.130315] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:33.363 [2024-06-11 13:08:52.131081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131977 ] 00:23:33.363 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:33.363 Zero copy mechanism will not be used. 
00:23:33.620 [2024-06-11 13:08:52.298669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.878 [2024-06-11 13:08:52.493901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.878 [2024-06-11 13:08:52.667219] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:34.443 13:08:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:34.443 13:08:53 -- common/autotest_common.sh@852 -- # return 0 00:23:34.443 13:08:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:34.443 13:08:53 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:34.443 13:08:53 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:34.701 BaseBdev1 00:23:34.701 13:08:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:34.701 13:08:53 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:34.701 13:08:53 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:34.701 BaseBdev2 00:23:34.958 13:08:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:34.958 13:08:53 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:34.958 13:08:53 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:34.958 BaseBdev3 00:23:34.958 13:08:53 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:35.215 spare_malloc 00:23:35.215 13:08:54 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:35.473 spare_delay 00:23:35.473 13:08:54 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:35.730 [2024-06-11 13:08:54.446875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:35.730 [2024-06-11 13:08:54.446960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.730 [2024-06-11 13:08:54.446990] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:35.730 [2024-06-11 13:08:54.447028] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.730 [2024-06-11 13:08:54.449003] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.730 [2024-06-11 13:08:54.449065] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:35.730 spare 00:23:35.730 13:08:54 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:35.988 [2024-06-11 13:08:54.638967] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:35.988 [2024-06-11 13:08:54.640799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:35.988 [2024-06-11 13:08:54.640869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:35.988 [2024-06-11 13:08:54.640952] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:35.988 
[2024-06-11 13:08:54.640966] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:35.988 [2024-06-11 13:08:54.641152] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:23:35.988 [2024-06-11 13:08:54.645594] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:35.988 [2024-06-11 13:08:54.645618] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:35.988 [2024-06-11 13:08:54.645848] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.988 13:08:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.245 13:08:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:36.245 "name": "raid_bdev1", 00:23:36.245 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:36.245 "strip_size_kb": 64, 00:23:36.245 "state": "online", 00:23:36.245 "raid_level": "raid5f", 00:23:36.245 "superblock": false, 00:23:36.245 "num_base_bdevs": 3, 00:23:36.245 "num_base_bdevs_discovered": 3, 00:23:36.245 "num_base_bdevs_operational": 3, 00:23:36.245 "base_bdevs_list": [ 00:23:36.245 { 00:23:36.245 "name": "BaseBdev1", 00:23:36.245 "uuid": "cd76783a-8b65-480f-8909-78240559b108", 00:23:36.245 "is_configured": true, 00:23:36.245 "data_offset": 0, 00:23:36.245 "data_size": 65536 00:23:36.245 }, 00:23:36.245 { 00:23:36.245 "name": "BaseBdev2", 00:23:36.245 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:36.245 "is_configured": true, 00:23:36.245 "data_offset": 0, 00:23:36.245 "data_size": 65536 00:23:36.245 }, 00:23:36.245 { 00:23:36.245 "name": "BaseBdev3", 00:23:36.245 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:36.245 "is_configured": true, 00:23:36.245 "data_offset": 0, 00:23:36.245 "data_size": 65536 00:23:36.245 } 00:23:36.245 ] 00:23:36.245 }' 00:23:36.245 13:08:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:36.245 13:08:54 -- common/autotest_common.sh@10 -- # set +x 00:23:36.810 13:08:55 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:36.810 13:08:55 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:37.068 [2024-06-11 13:08:55.739113] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:37.068 13:08:55 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:23:37.068 13:08:55 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:23:37.068 13:08:55 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:37.327 13:08:55 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:37.327 13:08:55 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:37.327 13:08:55 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:37.327 13:08:55 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:37.327 13:08:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:37.327 13:08:55 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:37.327 13:08:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:37.327 13:08:55 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:37.327 13:08:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:37.327 13:08:55 -- bdev/nbd_common.sh@12 -- # local i 00:23:37.327 13:08:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:37.327 13:08:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:37.327 13:08:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:37.585 [2024-06-11 13:08:56.219195] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:37.586 /dev/nbd0 00:23:37.586 13:08:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:37.586 13:08:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:37.586 13:08:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:37.586 13:08:56 -- common/autotest_common.sh@857 -- # local i 00:23:37.586 13:08:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:37.586 13:08:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:37.586 13:08:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:37.586 13:08:56 -- common/autotest_common.sh@861 -- # break 00:23:37.586 13:08:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:37.586 13:08:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:37.586 13:08:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:37.586 1+0 records in 00:23:37.586 1+0 records out 00:23:37.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181752 s, 22.5 MB/s 00:23:37.586 13:08:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:37.586 13:08:56 -- common/autotest_common.sh@874 -- # size=4096 00:23:37.586 13:08:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:37.586 13:08:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:37.586 13:08:56 -- common/autotest_common.sh@877 -- # return 0 00:23:37.586 13:08:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:37.586 13:08:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:37.586 13:08:56 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:37.586 13:08:56 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:37.586 13:08:56 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:37.586 13:08:56 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:23:37.848 512+0 records in 00:23:37.848 512+0 records out 00:23:37.848 67108864 bytes (67 MB, 64 MiB) copied, 0.380308 s, 176 MB/s 00:23:37.848 13:08:56 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:37.848 13:08:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:37.848 13:08:56 
-- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:37.848 13:08:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:37.848 13:08:56 -- bdev/nbd_common.sh@51 -- # local i 00:23:37.848 13:08:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.848 13:08:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:38.110 13:08:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:38.110 13:08:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:38.110 13:08:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:38.110 13:08:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:38.110 13:08:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:38.110 13:08:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:38.110 13:08:56 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:38.110 [2024-06-11 13:08:56.880752] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.368 13:08:56 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:38.368 13:08:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:38.368 13:08:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:38.368 13:08:56 -- bdev/nbd_common.sh@41 -- # break 00:23:38.368 13:08:56 -- bdev/nbd_common.sh@45 -- # return 0 00:23:38.368 13:08:56 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:38.626 [2024-06-11 13:08:57.230330] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.626 13:08:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.884 13:08:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:38.884 "name": "raid_bdev1", 00:23:38.884 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:38.884 "strip_size_kb": 64, 00:23:38.884 "state": "online", 00:23:38.884 "raid_level": "raid5f", 00:23:38.884 "superblock": false, 00:23:38.884 "num_base_bdevs": 3, 00:23:38.884 "num_base_bdevs_discovered": 2, 00:23:38.884 "num_base_bdevs_operational": 2, 00:23:38.884 "base_bdevs_list": [ 00:23:38.884 { 00:23:38.884 "name": null, 00:23:38.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.884 "is_configured": false, 00:23:38.884 "data_offset": 0, 00:23:38.884 "data_size": 65536 00:23:38.884 }, 00:23:38.884 { 00:23:38.884 "name": "BaseBdev2", 00:23:38.884 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:38.884 "is_configured": true, 00:23:38.884 "data_offset": 0, 00:23:38.884 "data_size": 65536 00:23:38.884 }, 
00:23:38.884 { 00:23:38.884 "name": "BaseBdev3", 00:23:38.884 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:38.884 "is_configured": true, 00:23:38.884 "data_offset": 0, 00:23:38.884 "data_size": 65536 00:23:38.884 } 00:23:38.884 ] 00:23:38.884 }' 00:23:38.884 13:08:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:38.884 13:08:57 -- common/autotest_common.sh@10 -- # set +x 00:23:39.450 13:08:58 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:39.708 [2024-06-11 13:08:58.370520] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:39.708 [2024-06-11 13:08:58.370570] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:39.708 [2024-06-11 13:08:58.381790] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cfb0 00:23:39.708 [2024-06-11 13:08:58.387331] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:39.708 13:08:58 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:40.643 13:08:59 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.643 13:08:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:40.643 13:08:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:40.643 13:08:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:40.643 13:08:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:40.643 13:08:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.643 13:08:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.901 13:08:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:40.901 "name": "raid_bdev1", 00:23:40.901 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:40.901 "strip_size_kb": 64, 00:23:40.901 "state": "online", 00:23:40.901 "raid_level": "raid5f", 00:23:40.901 "superblock": false, 00:23:40.901 "num_base_bdevs": 3, 00:23:40.901 "num_base_bdevs_discovered": 3, 00:23:40.901 "num_base_bdevs_operational": 3, 00:23:40.901 "process": { 00:23:40.901 "type": "rebuild", 00:23:40.901 "target": "spare", 00:23:40.901 "progress": { 00:23:40.901 "blocks": 22528, 00:23:40.901 "percent": 17 00:23:40.901 } 00:23:40.901 }, 00:23:40.901 "base_bdevs_list": [ 00:23:40.901 { 00:23:40.901 "name": "spare", 00:23:40.901 "uuid": "9c320c53-b685-575d-b71f-5596883fd965", 00:23:40.901 "is_configured": true, 00:23:40.901 "data_offset": 0, 00:23:40.901 "data_size": 65536 00:23:40.901 }, 00:23:40.901 { 00:23:40.901 "name": "BaseBdev2", 00:23:40.901 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:40.901 "is_configured": true, 00:23:40.901 "data_offset": 0, 00:23:40.901 "data_size": 65536 00:23:40.901 }, 00:23:40.901 { 00:23:40.901 "name": "BaseBdev3", 00:23:40.901 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:40.901 "is_configured": true, 00:23:40.901 "data_offset": 0, 00:23:40.901 "data_size": 65536 00:23:40.901 } 00:23:40.901 ] 00:23:40.901 }' 00:23:40.901 13:08:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:40.901 13:08:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:40.901 13:08:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:40.901 13:08:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:40.901 13:08:59 -- bdev/bdev_raid.sh@604 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:41.159 [2024-06-11 13:08:59.933060] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.418 [2024-06-11 13:09:00.000279] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:41.418 [2024-06-11 13:09:00.000374] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:41.418 "name": "raid_bdev1", 00:23:41.418 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:41.418 "strip_size_kb": 64, 00:23:41.418 "state": "online", 00:23:41.418 "raid_level": "raid5f", 00:23:41.418 "superblock": false, 00:23:41.418 "num_base_bdevs": 3, 00:23:41.418 "num_base_bdevs_discovered": 2, 00:23:41.418 "num_base_bdevs_operational": 2, 00:23:41.418 "base_bdevs_list": [ 00:23:41.418 { 00:23:41.418 "name": null, 00:23:41.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.418 "is_configured": false, 00:23:41.418 "data_offset": 0, 00:23:41.418 "data_size": 65536 00:23:41.418 }, 00:23:41.418 { 00:23:41.418 "name": "BaseBdev2", 00:23:41.418 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:41.418 "is_configured": true, 00:23:41.418 "data_offset": 0, 00:23:41.418 "data_size": 65536 00:23:41.418 }, 00:23:41.418 { 00:23:41.418 "name": "BaseBdev3", 00:23:41.418 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:41.418 "is_configured": true, 00:23:41.418 "data_offset": 0, 00:23:41.418 "data_size": 65536 00:23:41.418 } 00:23:41.418 ] 00:23:41.418 }' 00:23:41.418 13:09:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:41.418 13:09:00 -- common/autotest_common.sh@10 -- # set +x 00:23:42.352 13:09:00 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:42.352 13:09:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:42.352 13:09:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:42.352 13:09:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:42.352 13:09:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:42.352 13:09:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.352 13:09:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.352 13:09:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:42.352 "name": 
"raid_bdev1", 00:23:42.352 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:42.352 "strip_size_kb": 64, 00:23:42.352 "state": "online", 00:23:42.352 "raid_level": "raid5f", 00:23:42.352 "superblock": false, 00:23:42.352 "num_base_bdevs": 3, 00:23:42.352 "num_base_bdevs_discovered": 2, 00:23:42.352 "num_base_bdevs_operational": 2, 00:23:42.352 "base_bdevs_list": [ 00:23:42.352 { 00:23:42.352 "name": null, 00:23:42.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.352 "is_configured": false, 00:23:42.352 "data_offset": 0, 00:23:42.352 "data_size": 65536 00:23:42.352 }, 00:23:42.352 { 00:23:42.352 "name": "BaseBdev2", 00:23:42.352 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:42.352 "is_configured": true, 00:23:42.352 "data_offset": 0, 00:23:42.352 "data_size": 65536 00:23:42.352 }, 00:23:42.352 { 00:23:42.352 "name": "BaseBdev3", 00:23:42.352 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:42.352 "is_configured": true, 00:23:42.352 "data_offset": 0, 00:23:42.352 "data_size": 65536 00:23:42.352 } 00:23:42.353 ] 00:23:42.353 }' 00:23:42.353 13:09:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:42.353 13:09:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:42.353 13:09:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:42.610 13:09:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:42.611 13:09:01 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:42.611 [2024-06-11 13:09:01.406364] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:42.611 [2024-06-11 13:09:01.406424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:42.611 [2024-06-11 13:09:01.417403] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d150 00:23:42.611 [2024-06-11 13:09:01.422904] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:42.611 13:09:01 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:43.981 "name": "raid_bdev1", 00:23:43.981 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:43.981 "strip_size_kb": 64, 00:23:43.981 "state": "online", 00:23:43.981 "raid_level": "raid5f", 00:23:43.981 "superblock": false, 00:23:43.981 "num_base_bdevs": 3, 00:23:43.981 "num_base_bdevs_discovered": 3, 00:23:43.981 "num_base_bdevs_operational": 3, 00:23:43.981 "process": { 00:23:43.981 "type": "rebuild", 00:23:43.981 "target": "spare", 00:23:43.981 "progress": { 00:23:43.981 "blocks": 24576, 00:23:43.981 "percent": 18 00:23:43.981 } 00:23:43.981 }, 00:23:43.981 "base_bdevs_list": [ 00:23:43.981 { 00:23:43.981 "name": "spare", 00:23:43.981 "uuid": "9c320c53-b685-575d-b71f-5596883fd965", 
00:23:43.981 "is_configured": true, 00:23:43.981 "data_offset": 0, 00:23:43.981 "data_size": 65536 00:23:43.981 }, 00:23:43.981 { 00:23:43.981 "name": "BaseBdev2", 00:23:43.981 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:43.981 "is_configured": true, 00:23:43.981 "data_offset": 0, 00:23:43.981 "data_size": 65536 00:23:43.981 }, 00:23:43.981 { 00:23:43.981 "name": "BaseBdev3", 00:23:43.981 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:43.981 "is_configured": true, 00:23:43.981 "data_offset": 0, 00:23:43.981 "data_size": 65536 00:23:43.981 } 00:23:43.981 ] 00:23:43.981 }' 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@657 -- # local timeout=614 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.981 13:09:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.239 13:09:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:44.239 "name": "raid_bdev1", 00:23:44.239 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:44.239 "strip_size_kb": 64, 00:23:44.239 "state": "online", 00:23:44.239 "raid_level": "raid5f", 00:23:44.239 "superblock": false, 00:23:44.239 "num_base_bdevs": 3, 00:23:44.239 "num_base_bdevs_discovered": 3, 00:23:44.239 "num_base_bdevs_operational": 3, 00:23:44.239 "process": { 00:23:44.239 "type": "rebuild", 00:23:44.239 "target": "spare", 00:23:44.239 "progress": { 00:23:44.239 "blocks": 30720, 00:23:44.239 "percent": 23 00:23:44.239 } 00:23:44.239 }, 00:23:44.239 "base_bdevs_list": [ 00:23:44.239 { 00:23:44.239 "name": "spare", 00:23:44.239 "uuid": "9c320c53-b685-575d-b71f-5596883fd965", 00:23:44.239 "is_configured": true, 00:23:44.239 "data_offset": 0, 00:23:44.239 "data_size": 65536 00:23:44.239 }, 00:23:44.239 { 00:23:44.239 "name": "BaseBdev2", 00:23:44.239 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:44.239 "is_configured": true, 00:23:44.239 "data_offset": 0, 00:23:44.239 "data_size": 65536 00:23:44.239 }, 00:23:44.239 { 00:23:44.239 "name": "BaseBdev3", 00:23:44.239 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:44.239 "is_configured": true, 00:23:44.239 "data_offset": 0, 00:23:44.239 "data_size": 65536 00:23:44.239 } 00:23:44.239 ] 00:23:44.239 }' 00:23:44.239 13:09:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:44.239 13:09:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.239 13:09:03 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:23:44.497 13:09:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.497 13:09:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:45.431 13:09:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:45.431 13:09:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.431 13:09:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:45.431 13:09:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:45.431 13:09:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:45.431 13:09:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:45.431 13:09:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.431 13:09:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.689 13:09:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:45.689 "name": "raid_bdev1", 00:23:45.689 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:45.689 "strip_size_kb": 64, 00:23:45.689 "state": "online", 00:23:45.689 "raid_level": "raid5f", 00:23:45.689 "superblock": false, 00:23:45.689 "num_base_bdevs": 3, 00:23:45.689 "num_base_bdevs_discovered": 3, 00:23:45.690 "num_base_bdevs_operational": 3, 00:23:45.690 "process": { 00:23:45.690 "type": "rebuild", 00:23:45.690 "target": "spare", 00:23:45.690 "progress": { 00:23:45.690 "blocks": 57344, 00:23:45.690 "percent": 43 00:23:45.690 } 00:23:45.690 }, 00:23:45.690 "base_bdevs_list": [ 00:23:45.690 { 00:23:45.690 "name": "spare", 00:23:45.690 "uuid": "9c320c53-b685-575d-b71f-5596883fd965", 00:23:45.690 "is_configured": true, 00:23:45.690 "data_offset": 0, 00:23:45.690 "data_size": 65536 00:23:45.690 }, 00:23:45.690 { 00:23:45.690 "name": "BaseBdev2", 00:23:45.690 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:45.690 "is_configured": true, 00:23:45.690 "data_offset": 0, 00:23:45.690 "data_size": 65536 00:23:45.690 }, 00:23:45.690 { 00:23:45.690 "name": "BaseBdev3", 00:23:45.690 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:45.690 "is_configured": true, 00:23:45.690 "data_offset": 0, 00:23:45.690 "data_size": 65536 00:23:45.690 } 00:23:45.690 ] 00:23:45.690 }' 00:23:45.690 13:09:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:45.690 13:09:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:45.690 13:09:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:45.690 13:09:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:45.690 13:09:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:46.623 13:09:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:46.623 13:09:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:46.623 13:09:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:46.623 13:09:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:46.623 13:09:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:46.623 13:09:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:46.623 13:09:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.623 13:09:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.881 13:09:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:46.881 "name": "raid_bdev1", 00:23:46.881 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 
00:23:46.881 "strip_size_kb": 64, 00:23:46.881 "state": "online", 00:23:46.881 "raid_level": "raid5f", 00:23:46.881 "superblock": false, 00:23:46.881 "num_base_bdevs": 3, 00:23:46.881 "num_base_bdevs_discovered": 3, 00:23:46.881 "num_base_bdevs_operational": 3, 00:23:46.881 "process": { 00:23:46.881 "type": "rebuild", 00:23:46.881 "target": "spare", 00:23:46.881 "progress": { 00:23:46.881 "blocks": 83968, 00:23:46.881 "percent": 64 00:23:46.881 } 00:23:46.881 }, 00:23:46.881 "base_bdevs_list": [ 00:23:46.881 { 00:23:46.881 "name": "spare", 00:23:46.881 "uuid": "9c320c53-b685-575d-b71f-5596883fd965", 00:23:46.881 "is_configured": true, 00:23:46.881 "data_offset": 0, 00:23:46.881 "data_size": 65536 00:23:46.881 }, 00:23:46.881 { 00:23:46.881 "name": "BaseBdev2", 00:23:46.881 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:46.881 "is_configured": true, 00:23:46.881 "data_offset": 0, 00:23:46.881 "data_size": 65536 00:23:46.881 }, 00:23:46.881 { 00:23:46.881 "name": "BaseBdev3", 00:23:46.881 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:46.881 "is_configured": true, 00:23:46.881 "data_offset": 0, 00:23:46.881 "data_size": 65536 00:23:46.881 } 00:23:46.881 ] 00:23:46.881 }' 00:23:46.881 13:09:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:46.881 13:09:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:46.881 13:09:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:47.139 13:09:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.139 13:09:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:48.073 13:09:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:48.073 13:09:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.073 13:09:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:48.073 13:09:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:48.073 13:09:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:48.073 13:09:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:48.073 13:09:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.073 13:09:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.331 13:09:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:48.331 "name": "raid_bdev1", 00:23:48.331 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:48.331 "strip_size_kb": 64, 00:23:48.331 "state": "online", 00:23:48.331 "raid_level": "raid5f", 00:23:48.331 "superblock": false, 00:23:48.331 "num_base_bdevs": 3, 00:23:48.331 "num_base_bdevs_discovered": 3, 00:23:48.331 "num_base_bdevs_operational": 3, 00:23:48.331 "process": { 00:23:48.331 "type": "rebuild", 00:23:48.331 "target": "spare", 00:23:48.331 "progress": { 00:23:48.331 "blocks": 112640, 00:23:48.331 "percent": 85 00:23:48.331 } 00:23:48.331 }, 00:23:48.331 "base_bdevs_list": [ 00:23:48.331 { 00:23:48.331 "name": "spare", 00:23:48.331 "uuid": "9c320c53-b685-575d-b71f-5596883fd965", 00:23:48.331 "is_configured": true, 00:23:48.331 "data_offset": 0, 00:23:48.331 "data_size": 65536 00:23:48.331 }, 00:23:48.331 { 00:23:48.331 "name": "BaseBdev2", 00:23:48.331 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:48.331 "is_configured": true, 00:23:48.331 "data_offset": 0, 00:23:48.331 "data_size": 65536 00:23:48.331 }, 00:23:48.331 { 00:23:48.331 "name": "BaseBdev3", 00:23:48.331 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 
00:23:48.331 "is_configured": true, 00:23:48.331 "data_offset": 0, 00:23:48.331 "data_size": 65536 00:23:48.331 } 00:23:48.331 ] 00:23:48.331 }' 00:23:48.331 13:09:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:48.331 13:09:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.331 13:09:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:48.331 13:09:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.331 13:09:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:49.265 [2024-06-11 13:09:07.873379] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:49.265 [2024-06-11 13:09:07.873485] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:49.265 [2024-06-11 13:09:07.873574] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:49.265 13:09:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:49.265 13:09:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.265 13:09:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:49.265 13:09:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:49.265 13:09:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:49.265 13:09:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:49.265 13:09:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.265 13:09:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.523 13:09:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:49.523 "name": "raid_bdev1", 00:23:49.523 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:49.523 "strip_size_kb": 64, 00:23:49.523 "state": "online", 00:23:49.523 "raid_level": "raid5f", 00:23:49.523 "superblock": false, 00:23:49.523 "num_base_bdevs": 3, 00:23:49.523 "num_base_bdevs_discovered": 3, 00:23:49.523 "num_base_bdevs_operational": 3, 00:23:49.523 "base_bdevs_list": [ 00:23:49.523 { 00:23:49.523 "name": "spare", 00:23:49.523 "uuid": "9c320c53-b685-575d-b71f-5596883fd965", 00:23:49.523 "is_configured": true, 00:23:49.523 "data_offset": 0, 00:23:49.523 "data_size": 65536 00:23:49.523 }, 00:23:49.523 { 00:23:49.523 "name": "BaseBdev2", 00:23:49.523 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:49.523 "is_configured": true, 00:23:49.523 "data_offset": 0, 00:23:49.523 "data_size": 65536 00:23:49.523 }, 00:23:49.523 { 00:23:49.523 "name": "BaseBdev3", 00:23:49.523 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:49.523 "is_configured": true, 00:23:49.523 "data_offset": 0, 00:23:49.523 "data_size": 65536 00:23:49.523 } 00:23:49.523 ] 00:23:49.523 }' 00:23:49.523 13:09:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:49.781 13:09:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:49.781 13:09:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:49.781 13:09:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:49.781 13:09:08 -- bdev/bdev_raid.sh@660 -- # break 00:23:49.781 13:09:08 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:49.781 13:09:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:49.781 13:09:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:49.781 13:09:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:49.781 13:09:08 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:49.781 13:09:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.781 13:09:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:50.039 "name": "raid_bdev1", 00:23:50.039 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:50.039 "strip_size_kb": 64, 00:23:50.039 "state": "online", 00:23:50.039 "raid_level": "raid5f", 00:23:50.039 "superblock": false, 00:23:50.039 "num_base_bdevs": 3, 00:23:50.039 "num_base_bdevs_discovered": 3, 00:23:50.039 "num_base_bdevs_operational": 3, 00:23:50.039 "base_bdevs_list": [ 00:23:50.039 { 00:23:50.039 "name": "spare", 00:23:50.039 "uuid": "9c320c53-b685-575d-b71f-5596883fd965", 00:23:50.039 "is_configured": true, 00:23:50.039 "data_offset": 0, 00:23:50.039 "data_size": 65536 00:23:50.039 }, 00:23:50.039 { 00:23:50.039 "name": "BaseBdev2", 00:23:50.039 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:50.039 "is_configured": true, 00:23:50.039 "data_offset": 0, 00:23:50.039 "data_size": 65536 00:23:50.039 }, 00:23:50.039 { 00:23:50.039 "name": "BaseBdev3", 00:23:50.039 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:50.039 "is_configured": true, 00:23:50.039 "data_offset": 0, 00:23:50.039 "data_size": 65536 00:23:50.039 } 00:23:50.039 ] 00:23:50.039 }' 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.039 13:09:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.299 13:09:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:50.299 "name": "raid_bdev1", 00:23:50.299 "uuid": "4849923a-6b13-43a2-8ebb-da1ec472bbab", 00:23:50.299 "strip_size_kb": 64, 00:23:50.299 "state": "online", 00:23:50.299 "raid_level": "raid5f", 00:23:50.299 "superblock": false, 00:23:50.299 "num_base_bdevs": 3, 00:23:50.299 "num_base_bdevs_discovered": 3, 00:23:50.299 "num_base_bdevs_operational": 3, 00:23:50.299 "base_bdevs_list": [ 00:23:50.299 { 00:23:50.299 "name": "spare", 00:23:50.299 "uuid": "9c320c53-b685-575d-b71f-5596883fd965", 00:23:50.299 "is_configured": true, 00:23:50.299 "data_offset": 0, 00:23:50.299 "data_size": 65536 00:23:50.299 }, 00:23:50.299 { 00:23:50.299 "name": "BaseBdev2", 
00:23:50.299 "uuid": "ad78daf9-e3a2-4d0a-8b14-17d68074a421", 00:23:50.299 "is_configured": true, 00:23:50.299 "data_offset": 0, 00:23:50.299 "data_size": 65536 00:23:50.299 }, 00:23:50.299 { 00:23:50.299 "name": "BaseBdev3", 00:23:50.299 "uuid": "8336896c-6b62-4a06-803b-d80db568bf3c", 00:23:50.299 "is_configured": true, 00:23:50.299 "data_offset": 0, 00:23:50.299 "data_size": 65536 00:23:50.299 } 00:23:50.299 ] 00:23:50.299 }' 00:23:50.299 13:09:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:50.299 13:09:09 -- common/autotest_common.sh@10 -- # set +x 00:23:50.888 13:09:09 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:51.146 [2024-06-11 13:09:09.906823] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:51.147 [2024-06-11 13:09:09.906894] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:51.147 [2024-06-11 13:09:09.907023] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:51.147 [2024-06-11 13:09:09.907117] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:51.147 [2024-06-11 13:09:09.907131] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:51.147 13:09:09 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.147 13:09:09 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:51.404 13:09:10 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:51.404 13:09:10 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:51.404 13:09:10 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:51.404 13:09:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:51.404 13:09:10 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:51.404 13:09:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:51.404 13:09:10 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:51.404 13:09:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:51.404 13:09:10 -- bdev/nbd_common.sh@12 -- # local i 00:23:51.404 13:09:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:51.404 13:09:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:51.404 13:09:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:51.663 /dev/nbd0 00:23:51.663 13:09:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:51.663 13:09:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:51.663 13:09:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:51.663 13:09:10 -- common/autotest_common.sh@857 -- # local i 00:23:51.663 13:09:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:51.663 13:09:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:51.663 13:09:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:51.663 13:09:10 -- common/autotest_common.sh@861 -- # break 00:23:51.663 13:09:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:51.663 13:09:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:51.663 13:09:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:51.663 1+0 records in 00:23:51.663 1+0 records out 
00:23:51.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158335 s, 25.9 MB/s 00:23:51.663 13:09:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.663 13:09:10 -- common/autotest_common.sh@874 -- # size=4096 00:23:51.663 13:09:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.663 13:09:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:51.663 13:09:10 -- common/autotest_common.sh@877 -- # return 0 00:23:51.663 13:09:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:51.663 13:09:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:51.663 13:09:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:51.922 /dev/nbd1 00:23:51.922 13:09:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:51.922 13:09:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:51.922 13:09:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:51.922 13:09:10 -- common/autotest_common.sh@857 -- # local i 00:23:51.922 13:09:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:51.922 13:09:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:51.922 13:09:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:51.922 13:09:10 -- common/autotest_common.sh@861 -- # break 00:23:51.922 13:09:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:51.922 13:09:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:51.922 13:09:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:51.922 1+0 records in 00:23:51.922 1+0 records out 00:23:51.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033728 s, 12.1 MB/s 00:23:51.922 13:09:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.922 13:09:10 -- common/autotest_common.sh@874 -- # size=4096 00:23:51.922 13:09:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.922 13:09:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:51.922 13:09:10 -- common/autotest_common.sh@877 -- # return 0 00:23:51.922 13:09:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:51.922 13:09:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:51.922 13:09:10 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:52.180 13:09:10 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:52.180 13:09:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:52.180 13:09:10 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:52.180 13:09:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:52.180 13:09:10 -- bdev/nbd_common.sh@51 -- # local i 00:23:52.180 13:09:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:52.180 13:09:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:52.438 13:09:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:52.438 13:09:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:52.438 13:09:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:52.438 13:09:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:52.438 13:09:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.438 13:09:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:23:52.438 13:09:11 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@41 -- # break 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@45 -- # return 0 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:52.696 13:09:11 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:52.954 13:09:11 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:52.954 13:09:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.954 13:09:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:52.954 13:09:11 -- bdev/nbd_common.sh@41 -- # break 00:23:52.954 13:09:11 -- bdev/nbd_common.sh@45 -- # return 0 00:23:52.954 13:09:11 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:52.954 13:09:11 -- bdev/bdev_raid.sh@709 -- # killprocess 131977 00:23:52.954 13:09:11 -- common/autotest_common.sh@926 -- # '[' -z 131977 ']' 00:23:52.954 13:09:11 -- common/autotest_common.sh@930 -- # kill -0 131977 00:23:52.954 13:09:11 -- common/autotest_common.sh@931 -- # uname 00:23:52.954 13:09:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:52.954 13:09:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131977 00:23:52.954 killing process with pid 131977 00:23:52.954 Received shutdown signal, test time was about 60.000000 seconds 00:23:52.954 00:23:52.954 Latency(us) 00:23:52.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.954 =================================================================================================================== 00:23:52.954 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.954 13:09:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:52.954 13:09:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:52.954 13:09:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131977' 00:23:52.954 13:09:11 -- common/autotest_common.sh@945 -- # kill 131977 00:23:52.954 13:09:11 -- common/autotest_common.sh@950 -- # wait 131977 00:23:52.954 [2024-06-11 13:09:11.646728] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:53.213 [2024-06-11 13:09:11.914439] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:54.145 ************************************ 00:23:54.145 END TEST raid5f_rebuild_test 00:23:54.145 ************************************ 00:23:54.145 13:09:12 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:54.145 00:23:54.145 real 0m20.880s 00:23:54.145 user 0m31.563s 00:23:54.145 sys 0m2.205s 00:23:54.145 13:09:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:54.145 13:09:12 -- common/autotest_common.sh@10 -- # set +x 00:23:54.145 13:09:12 -- bdev/bdev_raid.sh@749 
-- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:23:54.146 13:09:12 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:54.146 13:09:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:54.146 13:09:12 -- common/autotest_common.sh@10 -- # set +x 00:23:54.404 ************************************ 00:23:54.404 START TEST raid5f_rebuild_test_sb 00:23:54.404 ************************************ 00:23:54.404 13:09:12 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@544 -- # raid_pid=132556 00:23:54.404 13:09:13 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132556 /var/tmp/spdk-raid.sock 00:23:54.404 13:09:12 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:54.404 13:09:13 -- common/autotest_common.sh@819 -- # '[' -z 132556 ']' 00:23:54.404 13:09:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:54.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:54.404 13:09:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:54.404 13:09:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
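For readability, the array set-up that the following trace performs over the freshly created RPC socket (the calls traced at bdev_raid.sh@550–@563 below) can be condensed to the sketch here. The RPC shorthand and the loop form are introduced only for brevity and are not part of the original script; the subcommands, sizes and flags are the ones visible in the trace (-z 64 sets the 64 KiB strip size, -s requests an on-disk superblock).

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"   # shorthand, not in the traced script
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $RPC bdev_malloc_create 32 512 -b ${b}_malloc     # 32 MB malloc backing bdev, 512-byte blocks, as traced
        $RPC bdev_passthru_create -b ${b}_malloc -p $b    # passthru wrapper that the raid bdev will claim
    done
    $RPC bdev_malloc_create 32 512 -b spare_malloc
    $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000   # adds write latency (microseconds) so the rebuild stays observable
    $RPC bdev_passthru_create -b spare_delay -p spare
    $RPC bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1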
00:23:54.404 13:09:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:54.404 13:09:13 -- common/autotest_common.sh@10 -- # set +x 00:23:54.405 [2024-06-11 13:09:13.059877] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:54.405 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:54.405 Zero copy mechanism will not be used. 00:23:54.405 [2024-06-11 13:09:13.060048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132556 ] 00:23:54.405 [2024-06-11 13:09:13.216732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.663 [2024-06-11 13:09:13.397357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.921 [2024-06-11 13:09:13.582199] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:55.180 13:09:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:55.180 13:09:14 -- common/autotest_common.sh@852 -- # return 0 00:23:55.180 13:09:14 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:55.180 13:09:14 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:55.180 13:09:14 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:55.438 BaseBdev1_malloc 00:23:55.438 13:09:14 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:55.697 [2024-06-11 13:09:14.407602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:55.697 [2024-06-11 13:09:14.407713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.697 [2024-06-11 13:09:14.407754] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:55.697 [2024-06-11 13:09:14.407805] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.697 [2024-06-11 13:09:14.410158] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.697 [2024-06-11 13:09:14.410208] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:55.697 BaseBdev1 00:23:55.697 13:09:14 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:55.697 13:09:14 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:55.697 13:09:14 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:55.956 BaseBdev2_malloc 00:23:55.956 13:09:14 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:56.215 [2024-06-11 13:09:14.922787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:56.215 [2024-06-11 13:09:14.922875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.215 [2024-06-11 13:09:14.922921] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:56.215 [2024-06-11 13:09:14.922996] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.215 [2024-06-11 13:09:14.925426] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:23:56.215 [2024-06-11 13:09:14.925486] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:56.215 BaseBdev2 00:23:56.215 13:09:14 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:56.215 13:09:14 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:56.215 13:09:14 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:56.474 BaseBdev3_malloc 00:23:56.474 13:09:15 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:56.733 [2024-06-11 13:09:15.336863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:56.733 [2024-06-11 13:09:15.336952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.733 [2024-06-11 13:09:15.336994] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:56.733 [2024-06-11 13:09:15.337041] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.733 [2024-06-11 13:09:15.339245] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.733 [2024-06-11 13:09:15.339301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:56.733 BaseBdev3 00:23:56.733 13:09:15 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:56.733 spare_malloc 00:23:56.991 13:09:15 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:56.991 spare_delay 00:23:56.991 13:09:15 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:57.249 [2024-06-11 13:09:15.954255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:57.249 [2024-06-11 13:09:15.954339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.249 [2024-06-11 13:09:15.954374] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:57.249 [2024-06-11 13:09:15.954423] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.249 [2024-06-11 13:09:15.956738] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.249 [2024-06-11 13:09:15.956794] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:57.249 spare 00:23:57.249 13:09:15 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:57.508 [2024-06-11 13:09:16.190400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:57.508 [2024-06-11 13:09:16.192536] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:57.508 [2024-06-11 13:09:16.192638] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:57.508 [2024-06-11 13:09:16.192934] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:23:57.508 [2024-06-11 
13:09:16.192959] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:57.508 [2024-06-11 13:09:16.193079] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:57.508 [2024-06-11 13:09:16.197609] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:23:57.508 [2024-06-11 13:09:16.197634] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:23:57.508 [2024-06-11 13:09:16.197816] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.508 13:09:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.766 13:09:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.766 "name": "raid_bdev1", 00:23:57.766 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:23:57.766 "strip_size_kb": 64, 00:23:57.766 "state": "online", 00:23:57.766 "raid_level": "raid5f", 00:23:57.766 "superblock": true, 00:23:57.766 "num_base_bdevs": 3, 00:23:57.766 "num_base_bdevs_discovered": 3, 00:23:57.766 "num_base_bdevs_operational": 3, 00:23:57.766 "base_bdevs_list": [ 00:23:57.766 { 00:23:57.766 "name": "BaseBdev1", 00:23:57.766 "uuid": "67ce194d-5d58-5162-8b9d-d7fbb823c589", 00:23:57.766 "is_configured": true, 00:23:57.766 "data_offset": 2048, 00:23:57.766 "data_size": 63488 00:23:57.766 }, 00:23:57.766 { 00:23:57.766 "name": "BaseBdev2", 00:23:57.766 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:23:57.766 "is_configured": true, 00:23:57.766 "data_offset": 2048, 00:23:57.766 "data_size": 63488 00:23:57.766 }, 00:23:57.766 { 00:23:57.766 "name": "BaseBdev3", 00:23:57.766 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:23:57.766 "is_configured": true, 00:23:57.766 "data_offset": 2048, 00:23:57.766 "data_size": 63488 00:23:57.766 } 00:23:57.766 ] 00:23:57.766 }' 00:23:57.766 13:09:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.766 13:09:16 -- common/autotest_common.sh@10 -- # set +x 00:23:58.331 13:09:17 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:58.331 13:09:17 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:58.589 [2024-06-11 13:09:17.319430] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:58.589 13:09:17 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:23:58.589 13:09:17 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.589 
13:09:17 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:58.847 13:09:17 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:58.847 13:09:17 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:58.847 13:09:17 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:58.847 13:09:17 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:58.847 13:09:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:58.847 13:09:17 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:58.848 13:09:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:58.848 13:09:17 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:58.848 13:09:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:58.848 13:09:17 -- bdev/nbd_common.sh@12 -- # local i 00:23:58.848 13:09:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:58.848 13:09:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:58.848 13:09:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:59.106 [2024-06-11 13:09:17.771391] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:59.106 /dev/nbd0 00:23:59.106 13:09:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:59.106 13:09:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:59.106 13:09:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:59.106 13:09:17 -- common/autotest_common.sh@857 -- # local i 00:23:59.106 13:09:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:59.106 13:09:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:59.106 13:09:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:59.106 13:09:17 -- common/autotest_common.sh@861 -- # break 00:23:59.106 13:09:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:59.106 13:09:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:59.106 13:09:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:59.106 1+0 records in 00:23:59.106 1+0 records out 00:23:59.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000836709 s, 4.9 MB/s 00:23:59.106 13:09:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.106 13:09:17 -- common/autotest_common.sh@874 -- # size=4096 00:23:59.106 13:09:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.106 13:09:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:59.106 13:09:17 -- common/autotest_common.sh@877 -- # return 0 00:23:59.106 13:09:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:59.106 13:09:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:59.106 13:09:17 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:59.106 13:09:17 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:59.106 13:09:17 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:59.106 13:09:17 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:23:59.672 496+0 records in 00:23:59.672 496+0 records out 00:23:59.672 65011712 bytes (65 MB, 62 MiB) copied, 0.400274 s, 162 MB/s 00:23:59.672 13:09:18 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:59.672 13:09:18 -- 
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@51 -- # local i 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:59.672 13:09:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:59.672 [2024-06-11 13:09:18.448180] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.930 13:09:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:59.930 13:09:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:59.930 13:09:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:59.930 13:09:18 -- bdev/nbd_common.sh@41 -- # break 00:23:59.930 13:09:18 -- bdev/nbd_common.sh@45 -- # return 0 00:23:59.930 13:09:18 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:00.189 [2024-06-11 13:09:18.801635] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.189 13:09:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.447 13:09:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.447 "name": "raid_bdev1", 00:24:00.447 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:00.447 "strip_size_kb": 64, 00:24:00.447 "state": "online", 00:24:00.447 "raid_level": "raid5f", 00:24:00.447 "superblock": true, 00:24:00.447 "num_base_bdevs": 3, 00:24:00.447 "num_base_bdevs_discovered": 2, 00:24:00.447 "num_base_bdevs_operational": 2, 00:24:00.447 "base_bdevs_list": [ 00:24:00.447 { 00:24:00.447 "name": null, 00:24:00.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.447 "is_configured": false, 00:24:00.447 "data_offset": 2048, 00:24:00.447 "data_size": 63488 00:24:00.447 }, 00:24:00.447 { 00:24:00.447 "name": "BaseBdev2", 00:24:00.447 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:00.447 "is_configured": true, 00:24:00.447 "data_offset": 2048, 00:24:00.447 "data_size": 63488 00:24:00.447 }, 
00:24:00.447 { 00:24:00.447 "name": "BaseBdev3", 00:24:00.447 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:00.447 "is_configured": true, 00:24:00.447 "data_offset": 2048, 00:24:00.447 "data_size": 63488 00:24:00.447 } 00:24:00.447 ] 00:24:00.447 }' 00:24:00.447 13:09:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:00.447 13:09:19 -- common/autotest_common.sh@10 -- # set +x 00:24:01.015 13:09:19 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:01.273 [2024-06-11 13:09:19.925934] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:01.273 [2024-06-11 13:09:19.926002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:01.273 [2024-06-11 13:09:19.937233] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002acc0 00:24:01.273 [2024-06-11 13:09:19.942957] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:01.273 13:09:19 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:02.209 13:09:20 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.209 13:09:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:02.209 13:09:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:02.209 13:09:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:02.209 13:09:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:02.209 13:09:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.209 13:09:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.467 13:09:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:02.467 "name": "raid_bdev1", 00:24:02.467 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:02.467 "strip_size_kb": 64, 00:24:02.467 "state": "online", 00:24:02.467 "raid_level": "raid5f", 00:24:02.467 "superblock": true, 00:24:02.467 "num_base_bdevs": 3, 00:24:02.467 "num_base_bdevs_discovered": 3, 00:24:02.467 "num_base_bdevs_operational": 3, 00:24:02.467 "process": { 00:24:02.467 "type": "rebuild", 00:24:02.467 "target": "spare", 00:24:02.467 "progress": { 00:24:02.467 "blocks": 24576, 00:24:02.467 "percent": 19 00:24:02.467 } 00:24:02.467 }, 00:24:02.467 "base_bdevs_list": [ 00:24:02.467 { 00:24:02.467 "name": "spare", 00:24:02.467 "uuid": "90c928e0-508e-5eb8-9ef0-fd443a8571fb", 00:24:02.467 "is_configured": true, 00:24:02.467 "data_offset": 2048, 00:24:02.467 "data_size": 63488 00:24:02.467 }, 00:24:02.467 { 00:24:02.467 "name": "BaseBdev2", 00:24:02.467 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:02.467 "is_configured": true, 00:24:02.467 "data_offset": 2048, 00:24:02.467 "data_size": 63488 00:24:02.467 }, 00:24:02.467 { 00:24:02.467 "name": "BaseBdev3", 00:24:02.467 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:02.467 "is_configured": true, 00:24:02.467 "data_offset": 2048, 00:24:02.467 "data_size": 63488 00:24:02.467 } 00:24:02.467 ] 00:24:02.467 }' 00:24:02.467 13:09:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:02.467 13:09:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.467 13:09:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:02.467 13:09:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.467 13:09:21 -- bdev/bdev_raid.sh@604 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:02.725 [2024-06-11 13:09:21.556321] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:02.725 [2024-06-11 13:09:21.557419] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:02.725 [2024-06-11 13:09:21.557517] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.983 13:09:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.241 13:09:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:03.241 "name": "raid_bdev1", 00:24:03.241 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:03.241 "strip_size_kb": 64, 00:24:03.241 "state": "online", 00:24:03.242 "raid_level": "raid5f", 00:24:03.242 "superblock": true, 00:24:03.242 "num_base_bdevs": 3, 00:24:03.242 "num_base_bdevs_discovered": 2, 00:24:03.242 "num_base_bdevs_operational": 2, 00:24:03.242 "base_bdevs_list": [ 00:24:03.242 { 00:24:03.242 "name": null, 00:24:03.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.242 "is_configured": false, 00:24:03.242 "data_offset": 2048, 00:24:03.242 "data_size": 63488 00:24:03.242 }, 00:24:03.242 { 00:24:03.242 "name": "BaseBdev2", 00:24:03.242 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:03.242 "is_configured": true, 00:24:03.242 "data_offset": 2048, 00:24:03.242 "data_size": 63488 00:24:03.242 }, 00:24:03.242 { 00:24:03.242 "name": "BaseBdev3", 00:24:03.242 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:03.242 "is_configured": true, 00:24:03.242 "data_offset": 2048, 00:24:03.242 "data_size": 63488 00:24:03.242 } 00:24:03.242 ] 00:24:03.242 }' 00:24:03.242 13:09:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:03.242 13:09:21 -- common/autotest_common.sh@10 -- # set +x 00:24:03.809 13:09:22 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:03.809 13:09:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:03.809 13:09:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:03.809 13:09:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:03.809 13:09:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:03.809 13:09:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.809 13:09:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.068 13:09:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:04.068 "name": 
"raid_bdev1", 00:24:04.068 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:04.068 "strip_size_kb": 64, 00:24:04.068 "state": "online", 00:24:04.068 "raid_level": "raid5f", 00:24:04.068 "superblock": true, 00:24:04.068 "num_base_bdevs": 3, 00:24:04.068 "num_base_bdevs_discovered": 2, 00:24:04.068 "num_base_bdevs_operational": 2, 00:24:04.068 "base_bdevs_list": [ 00:24:04.068 { 00:24:04.068 "name": null, 00:24:04.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.068 "is_configured": false, 00:24:04.068 "data_offset": 2048, 00:24:04.068 "data_size": 63488 00:24:04.068 }, 00:24:04.068 { 00:24:04.068 "name": "BaseBdev2", 00:24:04.068 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:04.068 "is_configured": true, 00:24:04.068 "data_offset": 2048, 00:24:04.068 "data_size": 63488 00:24:04.068 }, 00:24:04.068 { 00:24:04.068 "name": "BaseBdev3", 00:24:04.068 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:04.068 "is_configured": true, 00:24:04.068 "data_offset": 2048, 00:24:04.068 "data_size": 63488 00:24:04.068 } 00:24:04.068 ] 00:24:04.068 }' 00:24:04.068 13:09:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:04.068 13:09:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:04.068 13:09:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:04.068 13:09:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:04.068 13:09:22 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:04.327 [2024-06-11 13:09:23.119174] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:04.327 [2024-06-11 13:09:23.119237] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.327 [2024-06-11 13:09:23.129846] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:24:04.327 [2024-06-11 13:09:23.135659] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:04.327 13:09:23 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.703 "name": "raid_bdev1", 00:24:05.703 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:05.703 "strip_size_kb": 64, 00:24:05.703 "state": "online", 00:24:05.703 "raid_level": "raid5f", 00:24:05.703 "superblock": true, 00:24:05.703 "num_base_bdevs": 3, 00:24:05.703 "num_base_bdevs_discovered": 3, 00:24:05.703 "num_base_bdevs_operational": 3, 00:24:05.703 "process": { 00:24:05.703 "type": "rebuild", 00:24:05.703 "target": "spare", 00:24:05.703 "progress": { 00:24:05.703 "blocks": 24576, 00:24:05.703 "percent": 19 00:24:05.703 } 00:24:05.703 }, 00:24:05.703 "base_bdevs_list": [ 00:24:05.703 { 00:24:05.703 "name": "spare", 00:24:05.703 "uuid": "90c928e0-508e-5eb8-9ef0-fd443a8571fb", 
00:24:05.703 "is_configured": true, 00:24:05.703 "data_offset": 2048, 00:24:05.703 "data_size": 63488 00:24:05.703 }, 00:24:05.703 { 00:24:05.703 "name": "BaseBdev2", 00:24:05.703 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:05.703 "is_configured": true, 00:24:05.703 "data_offset": 2048, 00:24:05.703 "data_size": 63488 00:24:05.703 }, 00:24:05.703 { 00:24:05.703 "name": "BaseBdev3", 00:24:05.703 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:05.703 "is_configured": true, 00:24:05.703 "data_offset": 2048, 00:24:05.703 "data_size": 63488 00:24:05.703 } 00:24:05.703 ] 00:24:05.703 }' 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:05.703 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@657 -- # local timeout=636 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.703 13:09:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.962 13:09:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.962 "name": "raid_bdev1", 00:24:05.962 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:05.962 "strip_size_kb": 64, 00:24:05.962 "state": "online", 00:24:05.962 "raid_level": "raid5f", 00:24:05.962 "superblock": true, 00:24:05.962 "num_base_bdevs": 3, 00:24:05.962 "num_base_bdevs_discovered": 3, 00:24:05.962 "num_base_bdevs_operational": 3, 00:24:05.962 "process": { 00:24:05.962 "type": "rebuild", 00:24:05.962 "target": "spare", 00:24:05.962 "progress": { 00:24:05.962 "blocks": 30720, 00:24:05.962 "percent": 24 00:24:05.962 } 00:24:05.962 }, 00:24:05.962 "base_bdevs_list": [ 00:24:05.962 { 00:24:05.962 "name": "spare", 00:24:05.962 "uuid": "90c928e0-508e-5eb8-9ef0-fd443a8571fb", 00:24:05.962 "is_configured": true, 00:24:05.962 "data_offset": 2048, 00:24:05.962 "data_size": 63488 00:24:05.962 }, 00:24:05.962 { 00:24:05.962 "name": "BaseBdev2", 00:24:05.962 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:05.962 "is_configured": true, 00:24:05.962 "data_offset": 2048, 00:24:05.962 "data_size": 63488 00:24:05.962 }, 00:24:05.962 { 00:24:05.962 "name": "BaseBdev3", 00:24:05.962 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:05.962 "is_configured": true, 00:24:05.962 "data_offset": 2048, 00:24:05.962 "data_size": 63488 00:24:05.962 } 00:24:05.962 ] 00:24:05.962 }' 00:24:05.962 13:09:24 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.962 13:09:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.962 13:09:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:06.221 13:09:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.221 13:09:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:07.177 13:09:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:07.177 13:09:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.177 13:09:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:07.177 13:09:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:07.177 13:09:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:07.177 13:09:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:07.177 13:09:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.177 13:09:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.434 13:09:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:07.434 "name": "raid_bdev1", 00:24:07.434 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:07.434 "strip_size_kb": 64, 00:24:07.434 "state": "online", 00:24:07.434 "raid_level": "raid5f", 00:24:07.434 "superblock": true, 00:24:07.434 "num_base_bdevs": 3, 00:24:07.434 "num_base_bdevs_discovered": 3, 00:24:07.434 "num_base_bdevs_operational": 3, 00:24:07.434 "process": { 00:24:07.434 "type": "rebuild", 00:24:07.434 "target": "spare", 00:24:07.434 "progress": { 00:24:07.434 "blocks": 59392, 00:24:07.434 "percent": 46 00:24:07.434 } 00:24:07.434 }, 00:24:07.434 "base_bdevs_list": [ 00:24:07.434 { 00:24:07.434 "name": "spare", 00:24:07.434 "uuid": "90c928e0-508e-5eb8-9ef0-fd443a8571fb", 00:24:07.434 "is_configured": true, 00:24:07.434 "data_offset": 2048, 00:24:07.434 "data_size": 63488 00:24:07.434 }, 00:24:07.434 { 00:24:07.434 "name": "BaseBdev2", 00:24:07.434 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:07.435 "is_configured": true, 00:24:07.435 "data_offset": 2048, 00:24:07.435 "data_size": 63488 00:24:07.435 }, 00:24:07.435 { 00:24:07.435 "name": "BaseBdev3", 00:24:07.435 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:07.435 "is_configured": true, 00:24:07.435 "data_offset": 2048, 00:24:07.435 "data_size": 63488 00:24:07.435 } 00:24:07.435 ] 00:24:07.435 }' 00:24:07.435 13:09:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.435 13:09:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.435 13:09:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:07.435 13:09:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.435 13:09:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@188 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.810 "name": "raid_bdev1", 00:24:08.810 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:08.810 "strip_size_kb": 64, 00:24:08.810 "state": "online", 00:24:08.810 "raid_level": "raid5f", 00:24:08.810 "superblock": true, 00:24:08.810 "num_base_bdevs": 3, 00:24:08.810 "num_base_bdevs_discovered": 3, 00:24:08.810 "num_base_bdevs_operational": 3, 00:24:08.810 "process": { 00:24:08.810 "type": "rebuild", 00:24:08.810 "target": "spare", 00:24:08.810 "progress": { 00:24:08.810 "blocks": 86016, 00:24:08.810 "percent": 67 00:24:08.810 } 00:24:08.810 }, 00:24:08.810 "base_bdevs_list": [ 00:24:08.810 { 00:24:08.810 "name": "spare", 00:24:08.810 "uuid": "90c928e0-508e-5eb8-9ef0-fd443a8571fb", 00:24:08.810 "is_configured": true, 00:24:08.810 "data_offset": 2048, 00:24:08.810 "data_size": 63488 00:24:08.810 }, 00:24:08.810 { 00:24:08.810 "name": "BaseBdev2", 00:24:08.810 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:08.810 "is_configured": true, 00:24:08.810 "data_offset": 2048, 00:24:08.810 "data_size": 63488 00:24:08.810 }, 00:24:08.810 { 00:24:08.810 "name": "BaseBdev3", 00:24:08.810 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:08.810 "is_configured": true, 00:24:08.810 "data_offset": 2048, 00:24:08.810 "data_size": 63488 00:24:08.810 } 00:24:08.810 ] 00:24:08.810 }' 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.810 13:09:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:09.747 13:09:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:09.747 13:09:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.747 13:09:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:09.747 13:09:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:09.747 13:09:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:09.747 13:09:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:09.747 13:09:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.747 13:09:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.006 13:09:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:10.006 "name": "raid_bdev1", 00:24:10.006 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:10.006 "strip_size_kb": 64, 00:24:10.006 "state": "online", 00:24:10.006 "raid_level": "raid5f", 00:24:10.006 "superblock": true, 00:24:10.006 "num_base_bdevs": 3, 00:24:10.006 "num_base_bdevs_discovered": 3, 00:24:10.006 "num_base_bdevs_operational": 3, 00:24:10.006 "process": { 00:24:10.006 "type": "rebuild", 00:24:10.006 "target": "spare", 00:24:10.006 "progress": { 00:24:10.006 "blocks": 112640, 00:24:10.006 "percent": 88 00:24:10.006 } 00:24:10.006 }, 00:24:10.006 "base_bdevs_list": [ 00:24:10.006 { 00:24:10.006 "name": "spare", 00:24:10.006 "uuid": "90c928e0-508e-5eb8-9ef0-fd443a8571fb", 00:24:10.006 "is_configured": true, 00:24:10.006 "data_offset": 2048, 00:24:10.006 "data_size": 63488 00:24:10.006 }, 00:24:10.006 { 00:24:10.006 "name": "BaseBdev2", 00:24:10.006 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:10.006 
"is_configured": true, 00:24:10.006 "data_offset": 2048, 00:24:10.006 "data_size": 63488 00:24:10.006 }, 00:24:10.006 { 00:24:10.006 "name": "BaseBdev3", 00:24:10.006 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:10.006 "is_configured": true, 00:24:10.006 "data_offset": 2048, 00:24:10.006 "data_size": 63488 00:24:10.006 } 00:24:10.006 ] 00:24:10.006 }' 00:24:10.006 13:09:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:10.265 13:09:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.265 13:09:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:10.265 13:09:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.265 13:09:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:10.833 [2024-06-11 13:09:29.394170] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:10.833 [2024-06-11 13:09:29.394279] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:10.833 [2024-06-11 13:09:29.394474] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.092 13:09:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:11.092 13:09:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.092 13:09:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:11.092 13:09:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:11.092 13:09:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:11.092 13:09:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:11.092 13:09:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.092 13:09:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.351 13:09:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:11.351 "name": "raid_bdev1", 00:24:11.351 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:11.351 "strip_size_kb": 64, 00:24:11.351 "state": "online", 00:24:11.351 "raid_level": "raid5f", 00:24:11.351 "superblock": true, 00:24:11.351 "num_base_bdevs": 3, 00:24:11.351 "num_base_bdevs_discovered": 3, 00:24:11.351 "num_base_bdevs_operational": 3, 00:24:11.351 "base_bdevs_list": [ 00:24:11.351 { 00:24:11.351 "name": "spare", 00:24:11.351 "uuid": "90c928e0-508e-5eb8-9ef0-fd443a8571fb", 00:24:11.351 "is_configured": true, 00:24:11.351 "data_offset": 2048, 00:24:11.351 "data_size": 63488 00:24:11.351 }, 00:24:11.351 { 00:24:11.351 "name": "BaseBdev2", 00:24:11.351 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:11.351 "is_configured": true, 00:24:11.351 "data_offset": 2048, 00:24:11.351 "data_size": 63488 00:24:11.351 }, 00:24:11.351 { 00:24:11.351 "name": "BaseBdev3", 00:24:11.351 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:11.351 "is_configured": true, 00:24:11.351 "data_offset": 2048, 00:24:11.351 "data_size": 63488 00:24:11.351 } 00:24:11.351 ] 00:24:11.351 }' 00:24:11.351 13:09:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:11.610 13:09:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:11.610 13:09:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:11.610 13:09:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:11.610 13:09:30 -- bdev/bdev_raid.sh@660 -- # break 00:24:11.610 13:09:30 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.610 13:09:30 -- 
bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:11.610 13:09:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:11.610 13:09:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:11.610 13:09:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:11.610 13:09:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.610 13:09:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:11.869 "name": "raid_bdev1", 00:24:11.869 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:11.869 "strip_size_kb": 64, 00:24:11.869 "state": "online", 00:24:11.869 "raid_level": "raid5f", 00:24:11.869 "superblock": true, 00:24:11.869 "num_base_bdevs": 3, 00:24:11.869 "num_base_bdevs_discovered": 3, 00:24:11.869 "num_base_bdevs_operational": 3, 00:24:11.869 "base_bdevs_list": [ 00:24:11.869 { 00:24:11.869 "name": "spare", 00:24:11.869 "uuid": "90c928e0-508e-5eb8-9ef0-fd443a8571fb", 00:24:11.869 "is_configured": true, 00:24:11.869 "data_offset": 2048, 00:24:11.869 "data_size": 63488 00:24:11.869 }, 00:24:11.869 { 00:24:11.869 "name": "BaseBdev2", 00:24:11.869 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:11.869 "is_configured": true, 00:24:11.869 "data_offset": 2048, 00:24:11.869 "data_size": 63488 00:24:11.869 }, 00:24:11.869 { 00:24:11.869 "name": "BaseBdev3", 00:24:11.869 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:11.869 "is_configured": true, 00:24:11.869 "data_offset": 2048, 00:24:11.869 "data_size": 63488 00:24:11.869 } 00:24:11.869 ] 00:24:11.869 }' 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.869 13:09:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.137 13:09:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:12.137 "name": "raid_bdev1", 00:24:12.137 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:12.137 "strip_size_kb": 64, 00:24:12.137 "state": "online", 00:24:12.137 "raid_level": "raid5f", 00:24:12.137 "superblock": true, 00:24:12.137 "num_base_bdevs": 3, 00:24:12.137 "num_base_bdevs_discovered": 3, 00:24:12.137 "num_base_bdevs_operational": 3, 00:24:12.137 "base_bdevs_list": [ 00:24:12.137 { 00:24:12.137 "name": 
"spare", 00:24:12.137 "uuid": "90c928e0-508e-5eb8-9ef0-fd443a8571fb", 00:24:12.137 "is_configured": true, 00:24:12.137 "data_offset": 2048, 00:24:12.137 "data_size": 63488 00:24:12.137 }, 00:24:12.137 { 00:24:12.137 "name": "BaseBdev2", 00:24:12.137 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:12.137 "is_configured": true, 00:24:12.137 "data_offset": 2048, 00:24:12.137 "data_size": 63488 00:24:12.137 }, 00:24:12.137 { 00:24:12.137 "name": "BaseBdev3", 00:24:12.137 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:12.137 "is_configured": true, 00:24:12.137 "data_offset": 2048, 00:24:12.137 "data_size": 63488 00:24:12.137 } 00:24:12.137 ] 00:24:12.137 }' 00:24:12.137 13:09:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:12.137 13:09:30 -- common/autotest_common.sh@10 -- # set +x 00:24:12.758 13:09:31 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:13.016 [2024-06-11 13:09:31.723965] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:13.016 [2024-06-11 13:09:31.724017] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:13.016 [2024-06-11 13:09:31.724127] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:13.016 [2024-06-11 13:09:31.724241] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:13.016 [2024-06-11 13:09:31.724256] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:24:13.016 13:09:31 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:13.016 13:09:31 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.273 13:09:31 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:13.273 13:09:31 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:13.273 13:09:31 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:13.273 13:09:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:13.273 13:09:31 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:13.273 13:09:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:13.273 13:09:31 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:13.273 13:09:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:13.273 13:09:31 -- bdev/nbd_common.sh@12 -- # local i 00:24:13.273 13:09:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:13.273 13:09:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:13.273 13:09:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:13.530 /dev/nbd0 00:24:13.530 13:09:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:13.530 13:09:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:13.530 13:09:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:13.530 13:09:32 -- common/autotest_common.sh@857 -- # local i 00:24:13.530 13:09:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:13.530 13:09:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:13.530 13:09:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:13.530 13:09:32 -- common/autotest_common.sh@861 -- # break 00:24:13.530 13:09:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:13.530 13:09:32 -- 
common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:13.530 13:09:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:13.530 1+0 records in 00:24:13.530 1+0 records out 00:24:13.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174413 s, 23.5 MB/s 00:24:13.530 13:09:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.530 13:09:32 -- common/autotest_common.sh@874 -- # size=4096 00:24:13.530 13:09:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.530 13:09:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:13.530 13:09:32 -- common/autotest_common.sh@877 -- # return 0 00:24:13.530 13:09:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:13.530 13:09:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:13.530 13:09:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:13.787 /dev/nbd1 00:24:13.787 13:09:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:13.787 13:09:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:13.787 13:09:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:13.787 13:09:32 -- common/autotest_common.sh@857 -- # local i 00:24:13.787 13:09:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:13.787 13:09:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:13.787 13:09:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:13.787 13:09:32 -- common/autotest_common.sh@861 -- # break 00:24:13.787 13:09:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:13.787 13:09:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:13.787 13:09:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:13.787 1+0 records in 00:24:13.787 1+0 records out 00:24:13.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347975 s, 11.8 MB/s 00:24:13.787 13:09:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.787 13:09:32 -- common/autotest_common.sh@874 -- # size=4096 00:24:13.787 13:09:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.787 13:09:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:13.787 13:09:32 -- common/autotest_common.sh@877 -- # return 0 00:24:13.787 13:09:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:13.787 13:09:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:13.787 13:09:32 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:14.045 13:09:32 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:14.045 13:09:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:14.045 13:09:32 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:14.045 13:09:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:14.045 13:09:32 -- bdev/nbd_common.sh@51 -- # local i 00:24:14.045 13:09:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:14.045 13:09:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:14.304 13:09:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:14.304 13:09:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
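[editorial sketch] The trace above is the data-integrity half of the rebuild test: the rebuilt base bdev and the "spare" bdev are exported as NBD block devices and compared byte-for-byte past the superblock region. A condensed, illustrative rendering of that pattern, assuming the same RPC socket and device names that appear in the log (not a verbatim excerpt of bdev_raid.sh):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk BaseBdev1 /dev/nbd0    # base bdev that was rebuilt
    $rpc nbd_start_disk spare /dev/nbd1        # the spare it was rebuilt onto
    cmp -i 1048576 /dev/nbd0 /dev/nbd1         # skip 1 MiB = data_offset 2048 blocks * 512 B superblock area
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1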
00:24:14.304 13:09:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:14.304 13:09:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:14.304 13:09:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:14.304 13:09:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:14.304 13:09:32 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:14.304 13:09:33 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:14.304 13:09:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:14.304 13:09:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:14.304 13:09:33 -- bdev/nbd_common.sh@41 -- # break 00:24:14.304 13:09:33 -- bdev/nbd_common.sh@45 -- # return 0 00:24:14.304 13:09:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:14.304 13:09:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@41 -- # break 00:24:14.562 13:09:33 -- bdev/nbd_common.sh@45 -- # return 0 00:24:14.562 13:09:33 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:14.562 13:09:33 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:14.562 13:09:33 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:14.562 13:09:33 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:14.821 13:09:33 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:15.079 [2024-06-11 13:09:33.749502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:15.079 [2024-06-11 13:09:33.749623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.079 [2024-06-11 13:09:33.749663] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:15.079 [2024-06-11 13:09:33.749697] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.080 [2024-06-11 13:09:33.752306] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.080 [2024-06-11 13:09:33.752412] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:15.080 [2024-06-11 13:09:33.752535] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:15.080 [2024-06-11 13:09:33.752609] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:15.080 BaseBdev1 00:24:15.080 13:09:33 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:15.080 13:09:33 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:15.080 13:09:33 -- bdev/bdev_raid.sh@698 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:15.338 13:09:34 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:15.338 [2024-06-11 13:09:34.173701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:15.338 [2024-06-11 13:09:34.173802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.338 [2024-06-11 13:09:34.173852] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:15.338 [2024-06-11 13:09:34.173878] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.338 [2024-06-11 13:09:34.174431] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.338 [2024-06-11 13:09:34.174492] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:15.338 [2024-06-11 13:09:34.174604] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:15.338 [2024-06-11 13:09:34.174622] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:15.338 [2024-06-11 13:09:34.174629] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:15.338 [2024-06-11 13:09:34.174651] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state configuring 00:24:15.338 [2024-06-11 13:09:34.174720] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:15.338 BaseBdev2 00:24:15.596 13:09:34 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:15.596 13:09:34 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:15.596 13:09:34 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:15.596 13:09:34 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:15.855 [2024-06-11 13:09:34.593729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:15.855 [2024-06-11 13:09:34.593806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.855 [2024-06-11 13:09:34.593849] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:15.855 [2024-06-11 13:09:34.593873] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.855 [2024-06-11 13:09:34.594314] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.855 [2024-06-11 13:09:34.594368] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:15.855 [2024-06-11 13:09:34.594451] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:15.855 [2024-06-11 13:09:34.594476] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:15.855 BaseBdev3 00:24:15.855 13:09:34 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:16.114 13:09:34 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b spare_delay -p spare 00:24:16.374 [2024-06-11 13:09:34.987247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:16.374 [2024-06-11 13:09:34.987337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:16.374 [2024-06-11 13:09:34.987377] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:16.374 [2024-06-11 13:09:34.987412] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:16.374 [2024-06-11 13:09:34.987893] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:16.374 [2024-06-11 13:09:34.987956] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:16.374 [2024-06-11 13:09:34.988059] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:16.374 [2024-06-11 13:09:34.988087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:16.374 spare 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.374 13:09:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.374 [2024-06-11 13:09:35.088193] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b780 00:24:16.374 [2024-06-11 13:09:35.088218] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:16.374 [2024-06-11 13:09:35.088353] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004bb40 00:24:16.374 [2024-06-11 13:09:35.092632] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b780 00:24:16.374 [2024-06-11 13:09:35.092658] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b780 00:24:16.374 [2024-06-11 13:09:35.092806] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.633 13:09:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:16.633 "name": "raid_bdev1", 00:24:16.633 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:16.633 "strip_size_kb": 64, 00:24:16.633 "state": "online", 00:24:16.633 "raid_level": "raid5f", 00:24:16.633 "superblock": true, 00:24:16.633 "num_base_bdevs": 3, 00:24:16.633 "num_base_bdevs_discovered": 3, 00:24:16.633 "num_base_bdevs_operational": 3, 00:24:16.633 "base_bdevs_list": [ 00:24:16.633 { 00:24:16.633 "name": "spare", 00:24:16.633 "uuid": "90c928e0-508e-5eb8-9ef0-fd443a8571fb", 00:24:16.633 "is_configured": true, 00:24:16.633 "data_offset": 2048, 00:24:16.633 "data_size": 63488 00:24:16.633 }, 
00:24:16.633 { 00:24:16.633 "name": "BaseBdev2", 00:24:16.633 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:16.633 "is_configured": true, 00:24:16.633 "data_offset": 2048, 00:24:16.633 "data_size": 63488 00:24:16.633 }, 00:24:16.633 { 00:24:16.633 "name": "BaseBdev3", 00:24:16.633 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:16.633 "is_configured": true, 00:24:16.633 "data_offset": 2048, 00:24:16.633 "data_size": 63488 00:24:16.633 } 00:24:16.633 ] 00:24:16.633 }' 00:24:16.633 13:09:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:16.633 13:09:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.200 13:09:35 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:17.200 13:09:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:17.200 13:09:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:17.200 13:09:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:17.200 13:09:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:17.200 13:09:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.200 13:09:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.459 13:09:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:17.459 "name": "raid_bdev1", 00:24:17.459 "uuid": "8211024c-11c0-46db-a242-39067acd3ed8", 00:24:17.459 "strip_size_kb": 64, 00:24:17.459 "state": "online", 00:24:17.459 "raid_level": "raid5f", 00:24:17.459 "superblock": true, 00:24:17.459 "num_base_bdevs": 3, 00:24:17.459 "num_base_bdevs_discovered": 3, 00:24:17.459 "num_base_bdevs_operational": 3, 00:24:17.459 "base_bdevs_list": [ 00:24:17.459 { 00:24:17.459 "name": "spare", 00:24:17.459 "uuid": "90c928e0-508e-5eb8-9ef0-fd443a8571fb", 00:24:17.459 "is_configured": true, 00:24:17.459 "data_offset": 2048, 00:24:17.459 "data_size": 63488 00:24:17.459 }, 00:24:17.459 { 00:24:17.459 "name": "BaseBdev2", 00:24:17.459 "uuid": "e5ece2e8-edff-5b43-a052-4f5790718099", 00:24:17.459 "is_configured": true, 00:24:17.459 "data_offset": 2048, 00:24:17.459 "data_size": 63488 00:24:17.459 }, 00:24:17.459 { 00:24:17.459 "name": "BaseBdev3", 00:24:17.459 "uuid": "2aad348b-1bc8-5657-83c6-bccc6ceff2a2", 00:24:17.459 "is_configured": true, 00:24:17.459 "data_offset": 2048, 00:24:17.459 "data_size": 63488 00:24:17.459 } 00:24:17.459 ] 00:24:17.459 }' 00:24:17.459 13:09:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:17.459 13:09:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:17.459 13:09:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:17.459 13:09:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:17.459 13:09:36 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.459 13:09:36 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:17.718 13:09:36 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:17.718 13:09:36 -- bdev/bdev_raid.sh@709 -- # killprocess 132556 00:24:17.718 13:09:36 -- common/autotest_common.sh@926 -- # '[' -z 132556 ']' 00:24:17.718 13:09:36 -- common/autotest_common.sh@930 -- # kill -0 132556 00:24:17.718 13:09:36 -- common/autotest_common.sh@931 -- # uname 00:24:17.718 13:09:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:17.718 13:09:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132556 00:24:17.718 killing 
process with pid 132556 00:24:17.718 Received shutdown signal, test time was about 60.000000 seconds 00:24:17.718 00:24:17.718 Latency(us) 00:24:17.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.718 =================================================================================================================== 00:24:17.718 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:17.718 13:09:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:17.718 13:09:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:17.718 13:09:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132556' 00:24:17.718 13:09:36 -- common/autotest_common.sh@945 -- # kill 132556 00:24:17.718 13:09:36 -- common/autotest_common.sh@950 -- # wait 132556 00:24:17.718 [2024-06-11 13:09:36.533983] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:17.718 [2024-06-11 13:09:36.534097] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:17.718 [2024-06-11 13:09:36.534200] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:17.718 [2024-06-11 13:09:36.534234] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state offline 00:24:17.977 [2024-06-11 13:09:36.804285] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:19.354 ************************************ 00:24:19.354 END TEST raid5f_rebuild_test_sb 00:24:19.354 ************************************ 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:19.354 00:24:19.354 real 0m24.844s 00:24:19.354 user 0m39.190s 00:24:19.354 sys 0m2.666s 00:24:19.354 13:09:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.354 13:09:37 -- common/autotest_common.sh@10 -- # set +x 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:24:19.354 13:09:37 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:19.354 13:09:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:19.354 13:09:37 -- common/autotest_common.sh@10 -- # set +x 00:24:19.354 ************************************ 00:24:19.354 START TEST raid5f_state_function_test 00:24:19.354 ************************************ 00:24:19.354 13:09:37 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=133240 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133240' 00:24:19.354 Process raid pid: 133240 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133240 /var/tmp/spdk-raid.sock 00:24:19.354 13:09:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:19.354 13:09:37 -- common/autotest_common.sh@819 -- # '[' -z 133240 ']' 00:24:19.354 13:09:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:19.354 13:09:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:19.354 13:09:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:19.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:19.354 13:09:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:19.354 13:09:37 -- common/autotest_common.sh@10 -- # set +x 00:24:19.354 [2024-06-11 13:09:37.953533] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:19.354 [2024-06-11 13:09:37.953718] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.354 [2024-06-11 13:09:38.109714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.613 [2024-06-11 13:09:38.300399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.872 [2024-06-11 13:09:38.490302] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:20.130 13:09:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:20.130 13:09:38 -- common/autotest_common.sh@852 -- # return 0 00:24:20.130 13:09:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:20.389 [2024-06-11 13:09:39.078584] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:20.389 [2024-06-11 13:09:39.078682] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:20.389 [2024-06-11 13:09:39.078696] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:20.389 [2024-06-11 13:09:39.078723] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:20.389 [2024-06-11 13:09:39.078731] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:20.389 [2024-06-11 13:09:39.078773] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:20.389 [2024-06-11 13:09:39.078784] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:20.389 [2024-06-11 13:09:39.078809] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.389 13:09:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.648 13:09:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:20.648 "name": "Existed_Raid", 00:24:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.648 "strip_size_kb": 64, 00:24:20.648 "state": "configuring", 00:24:20.648 "raid_level": "raid5f", 00:24:20.648 "superblock": false, 00:24:20.648 "num_base_bdevs": 4, 00:24:20.648 "num_base_bdevs_discovered": 0, 00:24:20.648 "num_base_bdevs_operational": 4, 00:24:20.648 "base_bdevs_list": [ 00:24:20.648 { 00:24:20.648 
"name": "BaseBdev1", 00:24:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.648 "is_configured": false, 00:24:20.648 "data_offset": 0, 00:24:20.648 "data_size": 0 00:24:20.648 }, 00:24:20.648 { 00:24:20.648 "name": "BaseBdev2", 00:24:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.648 "is_configured": false, 00:24:20.648 "data_offset": 0, 00:24:20.648 "data_size": 0 00:24:20.648 }, 00:24:20.648 { 00:24:20.648 "name": "BaseBdev3", 00:24:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.648 "is_configured": false, 00:24:20.648 "data_offset": 0, 00:24:20.648 "data_size": 0 00:24:20.648 }, 00:24:20.648 { 00:24:20.648 "name": "BaseBdev4", 00:24:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.648 "is_configured": false, 00:24:20.648 "data_offset": 0, 00:24:20.648 "data_size": 0 00:24:20.648 } 00:24:20.648 ] 00:24:20.648 }' 00:24:20.648 13:09:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:20.648 13:09:39 -- common/autotest_common.sh@10 -- # set +x 00:24:21.215 13:09:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:21.474 [2024-06-11 13:09:40.150632] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:21.474 [2024-06-11 13:09:40.150663] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:21.474 13:09:40 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:21.733 [2024-06-11 13:09:40.406708] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:21.733 [2024-06-11 13:09:40.406758] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:21.733 [2024-06-11 13:09:40.406770] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:21.733 [2024-06-11 13:09:40.406804] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:21.733 [2024-06-11 13:09:40.406814] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:21.733 [2024-06-11 13:09:40.406851] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:21.733 [2024-06-11 13:09:40.406860] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:21.733 [2024-06-11 13:09:40.406885] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:21.733 13:09:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:21.992 [2024-06-11 13:09:40.624069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.992 BaseBdev1 00:24:21.992 13:09:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:21.992 13:09:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:21.992 13:09:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:21.992 13:09:40 -- common/autotest_common.sh@889 -- # local i 00:24:21.992 13:09:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:21.992 13:09:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:21.992 13:09:40 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:22.250 13:09:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:22.250 [ 00:24:22.250 { 00:24:22.250 "name": "BaseBdev1", 00:24:22.250 "aliases": [ 00:24:22.250 "0a692dc9-f697-4b40-8d8d-de924da05f20" 00:24:22.250 ], 00:24:22.250 "product_name": "Malloc disk", 00:24:22.250 "block_size": 512, 00:24:22.250 "num_blocks": 65536, 00:24:22.250 "uuid": "0a692dc9-f697-4b40-8d8d-de924da05f20", 00:24:22.250 "assigned_rate_limits": { 00:24:22.250 "rw_ios_per_sec": 0, 00:24:22.250 "rw_mbytes_per_sec": 0, 00:24:22.250 "r_mbytes_per_sec": 0, 00:24:22.250 "w_mbytes_per_sec": 0 00:24:22.250 }, 00:24:22.250 "claimed": true, 00:24:22.250 "claim_type": "exclusive_write", 00:24:22.250 "zoned": false, 00:24:22.250 "supported_io_types": { 00:24:22.250 "read": true, 00:24:22.250 "write": true, 00:24:22.250 "unmap": true, 00:24:22.250 "write_zeroes": true, 00:24:22.250 "flush": true, 00:24:22.250 "reset": true, 00:24:22.250 "compare": false, 00:24:22.250 "compare_and_write": false, 00:24:22.250 "abort": true, 00:24:22.250 "nvme_admin": false, 00:24:22.250 "nvme_io": false 00:24:22.250 }, 00:24:22.250 "memory_domains": [ 00:24:22.250 { 00:24:22.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.250 "dma_device_type": 2 00:24:22.250 } 00:24:22.250 ], 00:24:22.250 "driver_specific": {} 00:24:22.250 } 00:24:22.250 ] 00:24:22.508 13:09:41 -- common/autotest_common.sh@895 -- # return 0 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:22.508 "name": "Existed_Raid", 00:24:22.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.508 "strip_size_kb": 64, 00:24:22.508 "state": "configuring", 00:24:22.508 "raid_level": "raid5f", 00:24:22.508 "superblock": false, 00:24:22.508 "num_base_bdevs": 4, 00:24:22.508 "num_base_bdevs_discovered": 1, 00:24:22.508 "num_base_bdevs_operational": 4, 00:24:22.508 "base_bdevs_list": [ 00:24:22.508 { 00:24:22.508 "name": "BaseBdev1", 00:24:22.508 "uuid": "0a692dc9-f697-4b40-8d8d-de924da05f20", 00:24:22.508 "is_configured": true, 00:24:22.508 "data_offset": 0, 00:24:22.508 "data_size": 65536 00:24:22.508 }, 00:24:22.508 { 00:24:22.508 "name": "BaseBdev2", 00:24:22.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.508 "is_configured": false, 00:24:22.508 "data_offset": 0, 00:24:22.508 "data_size": 0 00:24:22.508 }, 
00:24:22.508 { 00:24:22.508 "name": "BaseBdev3", 00:24:22.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.508 "is_configured": false, 00:24:22.508 "data_offset": 0, 00:24:22.508 "data_size": 0 00:24:22.508 }, 00:24:22.508 { 00:24:22.508 "name": "BaseBdev4", 00:24:22.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.508 "is_configured": false, 00:24:22.508 "data_offset": 0, 00:24:22.508 "data_size": 0 00:24:22.508 } 00:24:22.508 ] 00:24:22.508 }' 00:24:22.508 13:09:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:22.509 13:09:41 -- common/autotest_common.sh@10 -- # set +x 00:24:23.084 13:09:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:23.342 [2024-06-11 13:09:42.156397] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:23.342 [2024-06-11 13:09:42.156663] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:23.342 13:09:42 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:24:23.342 13:09:42 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:23.600 [2024-06-11 13:09:42.416517] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:23.600 [2024-06-11 13:09:42.418880] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:23.600 [2024-06-11 13:09:42.419130] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:23.600 [2024-06-11 13:09:42.419260] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:23.600 [2024-06-11 13:09:42.419349] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:23.600 [2024-06-11 13:09:42.419460] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:23.600 [2024-06-11 13:09:42.419521] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.600 13:09:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.857 13:09:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:23.857 "name": "Existed_Raid", 00:24:23.857 
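[editorial sketch] The state checks in this trace all follow one pattern: dump every raid bdev over RPC, select the bdev under test with jq, then assert individual fields. A minimal sketch of that pattern, with the expected values taken from the "configuring" dump above (illustrative, not the actual body of verify_raid_bdev_state):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<< "$info") == "configuring" ]]                  # only BaseBdev1 exists yet
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == "1" ]]
    [[ $(jq -r '.raid_level' <<< "$info") == "raid5f" ]]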
"uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.857 "strip_size_kb": 64, 00:24:23.857 "state": "configuring", 00:24:23.857 "raid_level": "raid5f", 00:24:23.857 "superblock": false, 00:24:23.857 "num_base_bdevs": 4, 00:24:23.857 "num_base_bdevs_discovered": 1, 00:24:23.857 "num_base_bdevs_operational": 4, 00:24:23.857 "base_bdevs_list": [ 00:24:23.857 { 00:24:23.857 "name": "BaseBdev1", 00:24:23.857 "uuid": "0a692dc9-f697-4b40-8d8d-de924da05f20", 00:24:23.857 "is_configured": true, 00:24:23.857 "data_offset": 0, 00:24:23.857 "data_size": 65536 00:24:23.857 }, 00:24:23.857 { 00:24:23.857 "name": "BaseBdev2", 00:24:23.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.857 "is_configured": false, 00:24:23.857 "data_offset": 0, 00:24:23.857 "data_size": 0 00:24:23.857 }, 00:24:23.857 { 00:24:23.857 "name": "BaseBdev3", 00:24:23.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.857 "is_configured": false, 00:24:23.857 "data_offset": 0, 00:24:23.857 "data_size": 0 00:24:23.857 }, 00:24:23.857 { 00:24:23.857 "name": "BaseBdev4", 00:24:23.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.857 "is_configured": false, 00:24:23.857 "data_offset": 0, 00:24:23.857 "data_size": 0 00:24:23.857 } 00:24:23.857 ] 00:24:23.857 }' 00:24:23.857 13:09:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:23.857 13:09:42 -- common/autotest_common.sh@10 -- # set +x 00:24:24.789 13:09:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:24.789 [2024-06-11 13:09:43.604933] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:24.789 BaseBdev2 00:24:24.789 13:09:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:24.789 13:09:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:24.789 13:09:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:24.789 13:09:43 -- common/autotest_common.sh@889 -- # local i 00:24:24.789 13:09:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:24.789 13:09:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:24.789 13:09:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:25.047 13:09:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:25.306 [ 00:24:25.306 { 00:24:25.306 "name": "BaseBdev2", 00:24:25.306 "aliases": [ 00:24:25.306 "b3ad1d87-deb9-4cf0-afd0-54143ecb21d5" 00:24:25.306 ], 00:24:25.306 "product_name": "Malloc disk", 00:24:25.306 "block_size": 512, 00:24:25.306 "num_blocks": 65536, 00:24:25.306 "uuid": "b3ad1d87-deb9-4cf0-afd0-54143ecb21d5", 00:24:25.306 "assigned_rate_limits": { 00:24:25.306 "rw_ios_per_sec": 0, 00:24:25.306 "rw_mbytes_per_sec": 0, 00:24:25.306 "r_mbytes_per_sec": 0, 00:24:25.306 "w_mbytes_per_sec": 0 00:24:25.306 }, 00:24:25.306 "claimed": true, 00:24:25.306 "claim_type": "exclusive_write", 00:24:25.306 "zoned": false, 00:24:25.306 "supported_io_types": { 00:24:25.306 "read": true, 00:24:25.306 "write": true, 00:24:25.306 "unmap": true, 00:24:25.306 "write_zeroes": true, 00:24:25.306 "flush": true, 00:24:25.306 "reset": true, 00:24:25.306 "compare": false, 00:24:25.306 "compare_and_write": false, 00:24:25.306 "abort": true, 00:24:25.306 "nvme_admin": false, 00:24:25.306 "nvme_io": false 00:24:25.306 }, 00:24:25.306 "memory_domains": [ 
00:24:25.306 { 00:24:25.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.306 "dma_device_type": 2 00:24:25.306 } 00:24:25.306 ], 00:24:25.306 "driver_specific": {} 00:24:25.306 } 00:24:25.306 ] 00:24:25.306 13:09:44 -- common/autotest_common.sh@895 -- # return 0 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.306 13:09:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.564 13:09:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:25.564 "name": "Existed_Raid", 00:24:25.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.564 "strip_size_kb": 64, 00:24:25.564 "state": "configuring", 00:24:25.564 "raid_level": "raid5f", 00:24:25.564 "superblock": false, 00:24:25.564 "num_base_bdevs": 4, 00:24:25.564 "num_base_bdevs_discovered": 2, 00:24:25.564 "num_base_bdevs_operational": 4, 00:24:25.564 "base_bdevs_list": [ 00:24:25.564 { 00:24:25.564 "name": "BaseBdev1", 00:24:25.564 "uuid": "0a692dc9-f697-4b40-8d8d-de924da05f20", 00:24:25.564 "is_configured": true, 00:24:25.564 "data_offset": 0, 00:24:25.564 "data_size": 65536 00:24:25.564 }, 00:24:25.564 { 00:24:25.564 "name": "BaseBdev2", 00:24:25.564 "uuid": "b3ad1d87-deb9-4cf0-afd0-54143ecb21d5", 00:24:25.564 "is_configured": true, 00:24:25.564 "data_offset": 0, 00:24:25.564 "data_size": 65536 00:24:25.564 }, 00:24:25.564 { 00:24:25.564 "name": "BaseBdev3", 00:24:25.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.564 "is_configured": false, 00:24:25.564 "data_offset": 0, 00:24:25.564 "data_size": 0 00:24:25.564 }, 00:24:25.564 { 00:24:25.564 "name": "BaseBdev4", 00:24:25.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.564 "is_configured": false, 00:24:25.564 "data_offset": 0, 00:24:25.564 "data_size": 0 00:24:25.564 } 00:24:25.564 ] 00:24:25.564 }' 00:24:25.564 13:09:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:25.564 13:09:44 -- common/autotest_common.sh@10 -- # set +x 00:24:26.130 13:09:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:26.389 [2024-06-11 13:09:45.147705] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:26.389 BaseBdev3 00:24:26.389 13:09:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:26.389 13:09:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:26.389 13:09:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:26.389 
13:09:45 -- common/autotest_common.sh@889 -- # local i 00:24:26.389 13:09:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:26.389 13:09:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:26.389 13:09:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:26.647 13:09:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:26.905 [ 00:24:26.905 { 00:24:26.905 "name": "BaseBdev3", 00:24:26.905 "aliases": [ 00:24:26.905 "b5c3ebb0-6c95-4f56-a8d3-e7ac0fd65dde" 00:24:26.905 ], 00:24:26.905 "product_name": "Malloc disk", 00:24:26.905 "block_size": 512, 00:24:26.905 "num_blocks": 65536, 00:24:26.905 "uuid": "b5c3ebb0-6c95-4f56-a8d3-e7ac0fd65dde", 00:24:26.905 "assigned_rate_limits": { 00:24:26.905 "rw_ios_per_sec": 0, 00:24:26.905 "rw_mbytes_per_sec": 0, 00:24:26.905 "r_mbytes_per_sec": 0, 00:24:26.905 "w_mbytes_per_sec": 0 00:24:26.905 }, 00:24:26.905 "claimed": true, 00:24:26.905 "claim_type": "exclusive_write", 00:24:26.905 "zoned": false, 00:24:26.905 "supported_io_types": { 00:24:26.905 "read": true, 00:24:26.905 "write": true, 00:24:26.905 "unmap": true, 00:24:26.905 "write_zeroes": true, 00:24:26.905 "flush": true, 00:24:26.905 "reset": true, 00:24:26.905 "compare": false, 00:24:26.905 "compare_and_write": false, 00:24:26.905 "abort": true, 00:24:26.905 "nvme_admin": false, 00:24:26.905 "nvme_io": false 00:24:26.905 }, 00:24:26.905 "memory_domains": [ 00:24:26.905 { 00:24:26.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.905 "dma_device_type": 2 00:24:26.905 } 00:24:26.905 ], 00:24:26.905 "driver_specific": {} 00:24:26.905 } 00:24:26.905 ] 00:24:26.905 13:09:45 -- common/autotest_common.sh@895 -- # return 0 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.905 13:09:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.163 13:09:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:27.163 "name": "Existed_Raid", 00:24:27.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.163 "strip_size_kb": 64, 00:24:27.163 "state": "configuring", 00:24:27.163 "raid_level": "raid5f", 00:24:27.163 "superblock": false, 00:24:27.163 "num_base_bdevs": 4, 00:24:27.163 "num_base_bdevs_discovered": 3, 00:24:27.163 "num_base_bdevs_operational": 4, 00:24:27.163 "base_bdevs_list": [ 00:24:27.163 { 00:24:27.163 "name": 
"BaseBdev1", 00:24:27.163 "uuid": "0a692dc9-f697-4b40-8d8d-de924da05f20", 00:24:27.163 "is_configured": true, 00:24:27.163 "data_offset": 0, 00:24:27.163 "data_size": 65536 00:24:27.163 }, 00:24:27.163 { 00:24:27.163 "name": "BaseBdev2", 00:24:27.163 "uuid": "b3ad1d87-deb9-4cf0-afd0-54143ecb21d5", 00:24:27.163 "is_configured": true, 00:24:27.163 "data_offset": 0, 00:24:27.163 "data_size": 65536 00:24:27.163 }, 00:24:27.163 { 00:24:27.163 "name": "BaseBdev3", 00:24:27.164 "uuid": "b5c3ebb0-6c95-4f56-a8d3-e7ac0fd65dde", 00:24:27.164 "is_configured": true, 00:24:27.164 "data_offset": 0, 00:24:27.164 "data_size": 65536 00:24:27.164 }, 00:24:27.164 { 00:24:27.164 "name": "BaseBdev4", 00:24:27.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.164 "is_configured": false, 00:24:27.164 "data_offset": 0, 00:24:27.164 "data_size": 0 00:24:27.164 } 00:24:27.164 ] 00:24:27.164 }' 00:24:27.164 13:09:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:27.164 13:09:45 -- common/autotest_common.sh@10 -- # set +x 00:24:27.730 13:09:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:27.989 [2024-06-11 13:09:46.663662] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:27.989 [2024-06-11 13:09:46.663979] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:24:27.989 [2024-06-11 13:09:46.664025] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:27.989 [2024-06-11 13:09:46.664248] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:24:27.989 [2024-06-11 13:09:46.670364] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:24:27.989 [2024-06-11 13:09:46.670517] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:24:27.989 BaseBdev4 00:24:27.989 [2024-06-11 13:09:46.670903] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.989 13:09:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:27.989 13:09:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:24:27.989 13:09:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:27.989 13:09:46 -- common/autotest_common.sh@889 -- # local i 00:24:27.989 13:09:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:27.989 13:09:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:27.989 13:09:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:28.248 13:09:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:28.248 [ 00:24:28.248 { 00:24:28.248 "name": "BaseBdev4", 00:24:28.248 "aliases": [ 00:24:28.248 "64f02a36-1779-4dd8-a1be-42f325ae2cf8" 00:24:28.248 ], 00:24:28.248 "product_name": "Malloc disk", 00:24:28.248 "block_size": 512, 00:24:28.248 "num_blocks": 65536, 00:24:28.248 "uuid": "64f02a36-1779-4dd8-a1be-42f325ae2cf8", 00:24:28.248 "assigned_rate_limits": { 00:24:28.248 "rw_ios_per_sec": 0, 00:24:28.248 "rw_mbytes_per_sec": 0, 00:24:28.248 "r_mbytes_per_sec": 0, 00:24:28.249 "w_mbytes_per_sec": 0 00:24:28.249 }, 00:24:28.249 "claimed": true, 00:24:28.249 "claim_type": "exclusive_write", 00:24:28.249 "zoned": false, 00:24:28.249 
"supported_io_types": { 00:24:28.249 "read": true, 00:24:28.249 "write": true, 00:24:28.249 "unmap": true, 00:24:28.249 "write_zeroes": true, 00:24:28.249 "flush": true, 00:24:28.249 "reset": true, 00:24:28.249 "compare": false, 00:24:28.249 "compare_and_write": false, 00:24:28.249 "abort": true, 00:24:28.249 "nvme_admin": false, 00:24:28.249 "nvme_io": false 00:24:28.249 }, 00:24:28.249 "memory_domains": [ 00:24:28.249 { 00:24:28.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.249 "dma_device_type": 2 00:24:28.249 } 00:24:28.249 ], 00:24:28.249 "driver_specific": {} 00:24:28.249 } 00:24:28.249 ] 00:24:28.249 13:09:47 -- common/autotest_common.sh@895 -- # return 0 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.249 13:09:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.507 13:09:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:28.507 "name": "Existed_Raid", 00:24:28.507 "uuid": "254fe25c-8914-48ea-968a-b065b29d4a1a", 00:24:28.507 "strip_size_kb": 64, 00:24:28.507 "state": "online", 00:24:28.507 "raid_level": "raid5f", 00:24:28.507 "superblock": false, 00:24:28.507 "num_base_bdevs": 4, 00:24:28.507 "num_base_bdevs_discovered": 4, 00:24:28.507 "num_base_bdevs_operational": 4, 00:24:28.507 "base_bdevs_list": [ 00:24:28.507 { 00:24:28.507 "name": "BaseBdev1", 00:24:28.507 "uuid": "0a692dc9-f697-4b40-8d8d-de924da05f20", 00:24:28.507 "is_configured": true, 00:24:28.507 "data_offset": 0, 00:24:28.507 "data_size": 65536 00:24:28.507 }, 00:24:28.507 { 00:24:28.507 "name": "BaseBdev2", 00:24:28.507 "uuid": "b3ad1d87-deb9-4cf0-afd0-54143ecb21d5", 00:24:28.507 "is_configured": true, 00:24:28.507 "data_offset": 0, 00:24:28.507 "data_size": 65536 00:24:28.507 }, 00:24:28.507 { 00:24:28.507 "name": "BaseBdev3", 00:24:28.507 "uuid": "b5c3ebb0-6c95-4f56-a8d3-e7ac0fd65dde", 00:24:28.507 "is_configured": true, 00:24:28.507 "data_offset": 0, 00:24:28.507 "data_size": 65536 00:24:28.507 }, 00:24:28.507 { 00:24:28.507 "name": "BaseBdev4", 00:24:28.507 "uuid": "64f02a36-1779-4dd8-a1be-42f325ae2cf8", 00:24:28.507 "is_configured": true, 00:24:28.507 "data_offset": 0, 00:24:28.507 "data_size": 65536 00:24:28.507 } 00:24:28.507 ] 00:24:28.507 }' 00:24:28.507 13:09:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:28.507 13:09:47 -- common/autotest_common.sh@10 -- # set +x 00:24:29.074 13:09:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:24:29.332 [2024-06-11 13:09:48.137612] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:29.590 13:09:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:29.590 13:09:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:29.590 13:09:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:29.590 13:09:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.591 13:09:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.849 13:09:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:29.849 "name": "Existed_Raid", 00:24:29.849 "uuid": "254fe25c-8914-48ea-968a-b065b29d4a1a", 00:24:29.849 "strip_size_kb": 64, 00:24:29.849 "state": "online", 00:24:29.849 "raid_level": "raid5f", 00:24:29.849 "superblock": false, 00:24:29.849 "num_base_bdevs": 4, 00:24:29.849 "num_base_bdevs_discovered": 3, 00:24:29.849 "num_base_bdevs_operational": 3, 00:24:29.849 "base_bdevs_list": [ 00:24:29.849 { 00:24:29.849 "name": null, 00:24:29.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.849 "is_configured": false, 00:24:29.849 "data_offset": 0, 00:24:29.849 "data_size": 65536 00:24:29.849 }, 00:24:29.849 { 00:24:29.849 "name": "BaseBdev2", 00:24:29.849 "uuid": "b3ad1d87-deb9-4cf0-afd0-54143ecb21d5", 00:24:29.849 "is_configured": true, 00:24:29.849 "data_offset": 0, 00:24:29.849 "data_size": 65536 00:24:29.849 }, 00:24:29.849 { 00:24:29.849 "name": "BaseBdev3", 00:24:29.849 "uuid": "b5c3ebb0-6c95-4f56-a8d3-e7ac0fd65dde", 00:24:29.849 "is_configured": true, 00:24:29.849 "data_offset": 0, 00:24:29.849 "data_size": 65536 00:24:29.849 }, 00:24:29.849 { 00:24:29.849 "name": "BaseBdev4", 00:24:29.849 "uuid": "64f02a36-1779-4dd8-a1be-42f325ae2cf8", 00:24:29.849 "is_configured": true, 00:24:29.849 "data_offset": 0, 00:24:29.849 "data_size": 65536 00:24:29.849 } 00:24:29.849 ] 00:24:29.849 }' 00:24:29.849 13:09:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:29.849 13:09:48 -- common/autotest_common.sh@10 -- # set +x 00:24:30.416 13:09:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:30.416 13:09:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:30.416 13:09:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.416 13:09:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:30.680 13:09:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:30.680 13:09:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:24:30.680 13:09:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:30.680 [2024-06-11 13:09:49.474691] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:30.680 [2024-06-11 13:09:49.475004] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:30.680 [2024-06-11 13:09:49.475237] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:30.938 13:09:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:30.938 13:09:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:30.938 13:09:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.938 13:09:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:30.938 13:09:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:30.938 13:09:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:30.938 13:09:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:31.196 [2024-06-11 13:09:49.967484] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:31.455 13:09:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:31.455 13:09:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:31.455 13:09:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:31.455 13:09:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.455 13:09:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:31.455 13:09:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:31.455 13:09:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:31.713 [2024-06-11 13:09:50.459464] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:31.713 [2024-06-11 13:09:50.459752] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:24:31.713 13:09:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:31.713 13:09:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:31.713 13:09:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.713 13:09:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:31.971 13:09:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:31.971 13:09:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:31.971 13:09:50 -- bdev/bdev_raid.sh@287 -- # killprocess 133240 00:24:31.971 13:09:50 -- common/autotest_common.sh@926 -- # '[' -z 133240 ']' 00:24:31.971 13:09:50 -- common/autotest_common.sh@930 -- # kill -0 133240 00:24:31.971 13:09:50 -- common/autotest_common.sh@931 -- # uname 00:24:31.971 13:09:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:31.971 13:09:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133240 00:24:31.971 killing process with pid 133240 00:24:31.971 13:09:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:31.971 13:09:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:31.971 13:09:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133240' 00:24:31.971 13:09:50 -- 
common/autotest_common.sh@945 -- # kill 133240 00:24:31.971 13:09:50 -- common/autotest_common.sh@950 -- # wait 133240 00:24:31.971 [2024-06-11 13:09:50.806228] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:31.971 [2024-06-11 13:09:50.806374] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:33.347 ************************************ 00:24:33.347 END TEST raid5f_state_function_test 00:24:33.348 ************************************ 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:33.348 00:24:33.348 real 0m13.945s 00:24:33.348 user 0m25.012s 00:24:33.348 sys 0m1.582s 00:24:33.348 13:09:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:33.348 13:09:51 -- common/autotest_common.sh@10 -- # set +x 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:24:33.348 13:09:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:33.348 13:09:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:33.348 13:09:51 -- common/autotest_common.sh@10 -- # set +x 00:24:33.348 ************************************ 00:24:33.348 START TEST raid5f_state_function_test_sb 00:24:33.348 ************************************ 00:24:33.348 13:09:51 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:24:33.348 
13:09:51 -- bdev/bdev_raid.sh@226 -- # raid_pid=133702 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133702' 00:24:33.348 Process raid pid: 133702 00:24:33.348 13:09:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133702 /var/tmp/spdk-raid.sock 00:24:33.348 13:09:51 -- common/autotest_common.sh@819 -- # '[' -z 133702 ']' 00:24:33.348 13:09:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:33.348 13:09:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:33.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:33.348 13:09:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:33.348 13:09:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:33.348 13:09:51 -- common/autotest_common.sh@10 -- # set +x 00:24:33.348 [2024-06-11 13:09:51.957143] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:33.348 [2024-06-11 13:09:51.957539] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.348 [2024-06-11 13:09:52.119226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.606 [2024-06-11 13:09:52.353353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.863 [2024-06-11 13:09:52.528244] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:34.123 13:09:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:34.123 13:09:52 -- common/autotest_common.sh@852 -- # return 0 00:24:34.123 13:09:52 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:34.383 [2024-06-11 13:09:53.084717] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:34.383 [2024-06-11 13:09:53.084944] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:34.383 [2024-06-11 13:09:53.085076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:34.383 [2024-06-11 13:09:53.085191] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:34.383 [2024-06-11 13:09:53.085295] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:34.383 [2024-06-11 13:09:53.085376] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:34.383 [2024-06-11 13:09:53.085535] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:34.383 [2024-06-11 13:09:53.085680] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@119 -- # 
local raid_level=raid5f 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.383 13:09:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:34.641 13:09:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:34.641 "name": "Existed_Raid", 00:24:34.641 "uuid": "8bcf612a-4b60-4d9e-a100-974ff011eddf", 00:24:34.641 "strip_size_kb": 64, 00:24:34.641 "state": "configuring", 00:24:34.641 "raid_level": "raid5f", 00:24:34.641 "superblock": true, 00:24:34.641 "num_base_bdevs": 4, 00:24:34.641 "num_base_bdevs_discovered": 0, 00:24:34.641 "num_base_bdevs_operational": 4, 00:24:34.641 "base_bdevs_list": [ 00:24:34.641 { 00:24:34.641 "name": "BaseBdev1", 00:24:34.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.641 "is_configured": false, 00:24:34.641 "data_offset": 0, 00:24:34.641 "data_size": 0 00:24:34.641 }, 00:24:34.641 { 00:24:34.641 "name": "BaseBdev2", 00:24:34.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.641 "is_configured": false, 00:24:34.641 "data_offset": 0, 00:24:34.641 "data_size": 0 00:24:34.641 }, 00:24:34.641 { 00:24:34.641 "name": "BaseBdev3", 00:24:34.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.641 "is_configured": false, 00:24:34.641 "data_offset": 0, 00:24:34.641 "data_size": 0 00:24:34.641 }, 00:24:34.641 { 00:24:34.641 "name": "BaseBdev4", 00:24:34.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.641 "is_configured": false, 00:24:34.641 "data_offset": 0, 00:24:34.641 "data_size": 0 00:24:34.641 } 00:24:34.641 ] 00:24:34.641 }' 00:24:34.641 13:09:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:34.641 13:09:53 -- common/autotest_common.sh@10 -- # set +x 00:24:35.209 13:09:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:35.467 [2024-06-11 13:09:54.108870] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:35.467 [2024-06-11 13:09:54.109501] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:35.468 13:09:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:35.726 [2024-06-11 13:09:54.365010] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:35.726 [2024-06-11 13:09:54.365249] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:35.726 [2024-06-11 13:09:54.365370] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:35.726 [2024-06-11 13:09:54.365466] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:35.726 [2024-06-11 13:09:54.365571] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:35.726 
[2024-06-11 13:09:54.365660] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:35.726 [2024-06-11 13:09:54.365891] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:35.726 [2024-06-11 13:09:54.365950] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:35.726 13:09:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:35.985 [2024-06-11 13:09:54.600098] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:35.985 BaseBdev1 00:24:35.985 13:09:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:35.985 13:09:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:35.985 13:09:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:35.985 13:09:54 -- common/autotest_common.sh@889 -- # local i 00:24:35.985 13:09:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:35.985 13:09:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:35.985 13:09:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:35.985 13:09:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:36.243 [ 00:24:36.243 { 00:24:36.243 "name": "BaseBdev1", 00:24:36.243 "aliases": [ 00:24:36.243 "fa2f8188-5a32-40c5-ad62-3618f01d2e1d" 00:24:36.243 ], 00:24:36.243 "product_name": "Malloc disk", 00:24:36.243 "block_size": 512, 00:24:36.243 "num_blocks": 65536, 00:24:36.243 "uuid": "fa2f8188-5a32-40c5-ad62-3618f01d2e1d", 00:24:36.243 "assigned_rate_limits": { 00:24:36.243 "rw_ios_per_sec": 0, 00:24:36.243 "rw_mbytes_per_sec": 0, 00:24:36.243 "r_mbytes_per_sec": 0, 00:24:36.243 "w_mbytes_per_sec": 0 00:24:36.243 }, 00:24:36.243 "claimed": true, 00:24:36.243 "claim_type": "exclusive_write", 00:24:36.243 "zoned": false, 00:24:36.243 "supported_io_types": { 00:24:36.243 "read": true, 00:24:36.243 "write": true, 00:24:36.243 "unmap": true, 00:24:36.243 "write_zeroes": true, 00:24:36.243 "flush": true, 00:24:36.243 "reset": true, 00:24:36.243 "compare": false, 00:24:36.243 "compare_and_write": false, 00:24:36.243 "abort": true, 00:24:36.243 "nvme_admin": false, 00:24:36.243 "nvme_io": false 00:24:36.243 }, 00:24:36.243 "memory_domains": [ 00:24:36.243 { 00:24:36.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.243 "dma_device_type": 2 00:24:36.243 } 00:24:36.243 ], 00:24:36.243 "driver_specific": {} 00:24:36.243 } 00:24:36.243 ] 00:24:36.243 13:09:55 -- common/autotest_common.sh@895 -- # return 0 00:24:36.243 13:09:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:36.243 13:09:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:36.243 13:09:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:36.243 13:09:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:36.243 13:09:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:36.243 13:09:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:36.243 13:09:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.243 13:09:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.243 13:09:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.243 
13:09:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.243 13:09:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.243 13:09:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.502 13:09:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:36.502 "name": "Existed_Raid", 00:24:36.502 "uuid": "05afd111-1ae5-4e19-b78f-64b6227911c1", 00:24:36.502 "strip_size_kb": 64, 00:24:36.502 "state": "configuring", 00:24:36.502 "raid_level": "raid5f", 00:24:36.502 "superblock": true, 00:24:36.502 "num_base_bdevs": 4, 00:24:36.502 "num_base_bdevs_discovered": 1, 00:24:36.502 "num_base_bdevs_operational": 4, 00:24:36.502 "base_bdevs_list": [ 00:24:36.502 { 00:24:36.502 "name": "BaseBdev1", 00:24:36.502 "uuid": "fa2f8188-5a32-40c5-ad62-3618f01d2e1d", 00:24:36.502 "is_configured": true, 00:24:36.502 "data_offset": 2048, 00:24:36.502 "data_size": 63488 00:24:36.502 }, 00:24:36.502 { 00:24:36.502 "name": "BaseBdev2", 00:24:36.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.502 "is_configured": false, 00:24:36.502 "data_offset": 0, 00:24:36.502 "data_size": 0 00:24:36.502 }, 00:24:36.502 { 00:24:36.502 "name": "BaseBdev3", 00:24:36.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.502 "is_configured": false, 00:24:36.502 "data_offset": 0, 00:24:36.502 "data_size": 0 00:24:36.502 }, 00:24:36.502 { 00:24:36.502 "name": "BaseBdev4", 00:24:36.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.502 "is_configured": false, 00:24:36.502 "data_offset": 0, 00:24:36.502 "data_size": 0 00:24:36.502 } 00:24:36.502 ] 00:24:36.502 }' 00:24:36.502 13:09:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:36.502 13:09:55 -- common/autotest_common.sh@10 -- # set +x 00:24:37.068 13:09:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:37.326 [2024-06-11 13:09:55.968441] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:37.327 [2024-06-11 13:09:55.968631] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:37.327 13:09:55 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:24:37.327 13:09:55 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:37.585 13:09:56 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:37.843 BaseBdev1 00:24:37.843 13:09:56 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:24:37.843 13:09:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:37.843 13:09:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:37.843 13:09:56 -- common/autotest_common.sh@889 -- # local i 00:24:37.843 13:09:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:37.843 13:09:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:37.843 13:09:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:38.101 13:09:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:38.101 [ 00:24:38.101 { 00:24:38.101 "name": "BaseBdev1", 00:24:38.101 "aliases": [ 00:24:38.101 
"93ec23b2-f179-4eae-b921-39c51636477e" 00:24:38.101 ], 00:24:38.101 "product_name": "Malloc disk", 00:24:38.101 "block_size": 512, 00:24:38.101 "num_blocks": 65536, 00:24:38.101 "uuid": "93ec23b2-f179-4eae-b921-39c51636477e", 00:24:38.101 "assigned_rate_limits": { 00:24:38.101 "rw_ios_per_sec": 0, 00:24:38.101 "rw_mbytes_per_sec": 0, 00:24:38.101 "r_mbytes_per_sec": 0, 00:24:38.101 "w_mbytes_per_sec": 0 00:24:38.101 }, 00:24:38.101 "claimed": false, 00:24:38.101 "zoned": false, 00:24:38.101 "supported_io_types": { 00:24:38.101 "read": true, 00:24:38.101 "write": true, 00:24:38.101 "unmap": true, 00:24:38.101 "write_zeroes": true, 00:24:38.101 "flush": true, 00:24:38.101 "reset": true, 00:24:38.101 "compare": false, 00:24:38.101 "compare_and_write": false, 00:24:38.101 "abort": true, 00:24:38.101 "nvme_admin": false, 00:24:38.101 "nvme_io": false 00:24:38.101 }, 00:24:38.101 "memory_domains": [ 00:24:38.101 { 00:24:38.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.101 "dma_device_type": 2 00:24:38.101 } 00:24:38.101 ], 00:24:38.101 "driver_specific": {} 00:24:38.101 } 00:24:38.101 ] 00:24:38.360 13:09:56 -- common/autotest_common.sh@895 -- # return 0 00:24:38.360 13:09:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:38.360 [2024-06-11 13:09:57.140000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:38.360 [2024-06-11 13:09:57.141996] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:38.360 [2024-06-11 13:09:57.142621] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:38.360 [2024-06-11 13:09:57.142811] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:38.360 [2024-06-11 13:09:57.143019] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:38.360 [2024-06-11 13:09:57.143184] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:38.360 [2024-06-11 13:09:57.143336] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.360 13:09:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:38.619 13:09:57 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:24:38.619 "name": "Existed_Raid", 00:24:38.619 "uuid": "1de47a7a-50cf-4ea5-97be-b08c48c487a6", 00:24:38.619 "strip_size_kb": 64, 00:24:38.619 "state": "configuring", 00:24:38.619 "raid_level": "raid5f", 00:24:38.619 "superblock": true, 00:24:38.619 "num_base_bdevs": 4, 00:24:38.619 "num_base_bdevs_discovered": 1, 00:24:38.619 "num_base_bdevs_operational": 4, 00:24:38.619 "base_bdevs_list": [ 00:24:38.619 { 00:24:38.619 "name": "BaseBdev1", 00:24:38.619 "uuid": "93ec23b2-f179-4eae-b921-39c51636477e", 00:24:38.619 "is_configured": true, 00:24:38.619 "data_offset": 2048, 00:24:38.619 "data_size": 63488 00:24:38.619 }, 00:24:38.619 { 00:24:38.619 "name": "BaseBdev2", 00:24:38.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.619 "is_configured": false, 00:24:38.619 "data_offset": 0, 00:24:38.619 "data_size": 0 00:24:38.619 }, 00:24:38.619 { 00:24:38.619 "name": "BaseBdev3", 00:24:38.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.619 "is_configured": false, 00:24:38.619 "data_offset": 0, 00:24:38.619 "data_size": 0 00:24:38.619 }, 00:24:38.619 { 00:24:38.619 "name": "BaseBdev4", 00:24:38.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.619 "is_configured": false, 00:24:38.619 "data_offset": 0, 00:24:38.619 "data_size": 0 00:24:38.619 } 00:24:38.619 ] 00:24:38.619 }' 00:24:38.619 13:09:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:38.619 13:09:57 -- common/autotest_common.sh@10 -- # set +x 00:24:39.185 13:09:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:39.443 [2024-06-11 13:09:58.238020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:39.443 BaseBdev2 00:24:39.443 13:09:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:39.444 13:09:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:39.444 13:09:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:39.444 13:09:58 -- common/autotest_common.sh@889 -- # local i 00:24:39.444 13:09:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:39.444 13:09:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:39.444 13:09:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:39.702 13:09:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:39.960 [ 00:24:39.960 { 00:24:39.960 "name": "BaseBdev2", 00:24:39.960 "aliases": [ 00:24:39.960 "8fbb6338-3e3d-438e-afc8-4491e5b4ea07" 00:24:39.960 ], 00:24:39.960 "product_name": "Malloc disk", 00:24:39.960 "block_size": 512, 00:24:39.960 "num_blocks": 65536, 00:24:39.960 "uuid": "8fbb6338-3e3d-438e-afc8-4491e5b4ea07", 00:24:39.960 "assigned_rate_limits": { 00:24:39.960 "rw_ios_per_sec": 0, 00:24:39.960 "rw_mbytes_per_sec": 0, 00:24:39.960 "r_mbytes_per_sec": 0, 00:24:39.960 "w_mbytes_per_sec": 0 00:24:39.960 }, 00:24:39.960 "claimed": true, 00:24:39.960 "claim_type": "exclusive_write", 00:24:39.960 "zoned": false, 00:24:39.960 "supported_io_types": { 00:24:39.960 "read": true, 00:24:39.960 "write": true, 00:24:39.960 "unmap": true, 00:24:39.960 "write_zeroes": true, 00:24:39.960 "flush": true, 00:24:39.960 "reset": true, 00:24:39.960 "compare": false, 00:24:39.960 "compare_and_write": false, 00:24:39.960 "abort": true, 00:24:39.960 "nvme_admin": false, 00:24:39.960 
"nvme_io": false 00:24:39.960 }, 00:24:39.961 "memory_domains": [ 00:24:39.961 { 00:24:39.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.961 "dma_device_type": 2 00:24:39.961 } 00:24:39.961 ], 00:24:39.961 "driver_specific": {} 00:24:39.961 } 00:24:39.961 ] 00:24:39.961 13:09:58 -- common/autotest_common.sh@895 -- # return 0 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.961 13:09:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:40.219 13:09:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:40.219 "name": "Existed_Raid", 00:24:40.219 "uuid": "1de47a7a-50cf-4ea5-97be-b08c48c487a6", 00:24:40.219 "strip_size_kb": 64, 00:24:40.219 "state": "configuring", 00:24:40.219 "raid_level": "raid5f", 00:24:40.219 "superblock": true, 00:24:40.219 "num_base_bdevs": 4, 00:24:40.219 "num_base_bdevs_discovered": 2, 00:24:40.219 "num_base_bdevs_operational": 4, 00:24:40.219 "base_bdevs_list": [ 00:24:40.219 { 00:24:40.219 "name": "BaseBdev1", 00:24:40.219 "uuid": "93ec23b2-f179-4eae-b921-39c51636477e", 00:24:40.219 "is_configured": true, 00:24:40.219 "data_offset": 2048, 00:24:40.219 "data_size": 63488 00:24:40.219 }, 00:24:40.219 { 00:24:40.219 "name": "BaseBdev2", 00:24:40.219 "uuid": "8fbb6338-3e3d-438e-afc8-4491e5b4ea07", 00:24:40.219 "is_configured": true, 00:24:40.219 "data_offset": 2048, 00:24:40.219 "data_size": 63488 00:24:40.219 }, 00:24:40.219 { 00:24:40.219 "name": "BaseBdev3", 00:24:40.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.219 "is_configured": false, 00:24:40.219 "data_offset": 0, 00:24:40.219 "data_size": 0 00:24:40.219 }, 00:24:40.219 { 00:24:40.219 "name": "BaseBdev4", 00:24:40.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.219 "is_configured": false, 00:24:40.219 "data_offset": 0, 00:24:40.219 "data_size": 0 00:24:40.219 } 00:24:40.219 ] 00:24:40.219 }' 00:24:40.219 13:09:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:40.219 13:09:58 -- common/autotest_common.sh@10 -- # set +x 00:24:40.785 13:09:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:41.044 [2024-06-11 13:09:59.815639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:41.044 BaseBdev3 00:24:41.044 13:09:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:41.044 13:09:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:41.044 13:09:59 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:41.044 13:09:59 -- common/autotest_common.sh@889 -- # local i 00:24:41.044 13:09:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:41.044 13:09:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:41.044 13:09:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:41.302 13:10:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:41.560 [ 00:24:41.560 { 00:24:41.560 "name": "BaseBdev3", 00:24:41.560 "aliases": [ 00:24:41.560 "0446bffc-ce1f-4239-b302-dd24035c93a8" 00:24:41.560 ], 00:24:41.560 "product_name": "Malloc disk", 00:24:41.560 "block_size": 512, 00:24:41.560 "num_blocks": 65536, 00:24:41.560 "uuid": "0446bffc-ce1f-4239-b302-dd24035c93a8", 00:24:41.560 "assigned_rate_limits": { 00:24:41.560 "rw_ios_per_sec": 0, 00:24:41.560 "rw_mbytes_per_sec": 0, 00:24:41.560 "r_mbytes_per_sec": 0, 00:24:41.560 "w_mbytes_per_sec": 0 00:24:41.560 }, 00:24:41.560 "claimed": true, 00:24:41.560 "claim_type": "exclusive_write", 00:24:41.560 "zoned": false, 00:24:41.560 "supported_io_types": { 00:24:41.560 "read": true, 00:24:41.560 "write": true, 00:24:41.560 "unmap": true, 00:24:41.560 "write_zeroes": true, 00:24:41.560 "flush": true, 00:24:41.560 "reset": true, 00:24:41.560 "compare": false, 00:24:41.560 "compare_and_write": false, 00:24:41.560 "abort": true, 00:24:41.560 "nvme_admin": false, 00:24:41.560 "nvme_io": false 00:24:41.560 }, 00:24:41.560 "memory_domains": [ 00:24:41.560 { 00:24:41.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.560 "dma_device_type": 2 00:24:41.560 } 00:24:41.560 ], 00:24:41.560 "driver_specific": {} 00:24:41.560 } 00:24:41.560 ] 00:24:41.560 13:10:00 -- common/autotest_common.sh@895 -- # return 0 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.560 13:10:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:41.819 13:10:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:41.819 "name": "Existed_Raid", 00:24:41.819 "uuid": "1de47a7a-50cf-4ea5-97be-b08c48c487a6", 00:24:41.819 "strip_size_kb": 64, 00:24:41.819 "state": "configuring", 00:24:41.819 "raid_level": "raid5f", 00:24:41.819 "superblock": true, 00:24:41.819 "num_base_bdevs": 4, 00:24:41.819 "num_base_bdevs_discovered": 3, 00:24:41.819 "num_base_bdevs_operational": 4, 
00:24:41.819 "base_bdevs_list": [ 00:24:41.819 { 00:24:41.819 "name": "BaseBdev1", 00:24:41.819 "uuid": "93ec23b2-f179-4eae-b921-39c51636477e", 00:24:41.819 "is_configured": true, 00:24:41.819 "data_offset": 2048, 00:24:41.819 "data_size": 63488 00:24:41.819 }, 00:24:41.819 { 00:24:41.819 "name": "BaseBdev2", 00:24:41.819 "uuid": "8fbb6338-3e3d-438e-afc8-4491e5b4ea07", 00:24:41.819 "is_configured": true, 00:24:41.819 "data_offset": 2048, 00:24:41.819 "data_size": 63488 00:24:41.819 }, 00:24:41.819 { 00:24:41.819 "name": "BaseBdev3", 00:24:41.819 "uuid": "0446bffc-ce1f-4239-b302-dd24035c93a8", 00:24:41.819 "is_configured": true, 00:24:41.819 "data_offset": 2048, 00:24:41.819 "data_size": 63488 00:24:41.819 }, 00:24:41.819 { 00:24:41.819 "name": "BaseBdev4", 00:24:41.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.819 "is_configured": false, 00:24:41.819 "data_offset": 0, 00:24:41.819 "data_size": 0 00:24:41.819 } 00:24:41.819 ] 00:24:41.819 }' 00:24:41.819 13:10:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:41.819 13:10:00 -- common/autotest_common.sh@10 -- # set +x 00:24:42.386 13:10:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:42.645 [2024-06-11 13:10:01.410055] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:42.645 [2024-06-11 13:10:01.410507] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:24:42.645 [2024-06-11 13:10:01.410625] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:42.645 BaseBdev4 00:24:42.645 [2024-06-11 13:10:01.410772] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:42.645 [2024-06-11 13:10:01.417555] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:24:42.645 [2024-06-11 13:10:01.417700] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:24:42.645 [2024-06-11 13:10:01.418001] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:42.645 13:10:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:42.645 13:10:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:24:42.645 13:10:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:42.645 13:10:01 -- common/autotest_common.sh@889 -- # local i 00:24:42.645 13:10:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:42.645 13:10:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:42.645 13:10:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:42.904 13:10:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:43.162 [ 00:24:43.162 { 00:24:43.162 "name": "BaseBdev4", 00:24:43.162 "aliases": [ 00:24:43.162 "6b85b875-3472-4b86-8a68-d037235e8358" 00:24:43.162 ], 00:24:43.162 "product_name": "Malloc disk", 00:24:43.162 "block_size": 512, 00:24:43.162 "num_blocks": 65536, 00:24:43.162 "uuid": "6b85b875-3472-4b86-8a68-d037235e8358", 00:24:43.162 "assigned_rate_limits": { 00:24:43.162 "rw_ios_per_sec": 0, 00:24:43.162 "rw_mbytes_per_sec": 0, 00:24:43.162 "r_mbytes_per_sec": 0, 00:24:43.162 "w_mbytes_per_sec": 0 00:24:43.162 }, 00:24:43.162 "claimed": true, 00:24:43.162 "claim_type": 
"exclusive_write", 00:24:43.162 "zoned": false, 00:24:43.162 "supported_io_types": { 00:24:43.162 "read": true, 00:24:43.162 "write": true, 00:24:43.162 "unmap": true, 00:24:43.162 "write_zeroes": true, 00:24:43.162 "flush": true, 00:24:43.162 "reset": true, 00:24:43.162 "compare": false, 00:24:43.162 "compare_and_write": false, 00:24:43.162 "abort": true, 00:24:43.162 "nvme_admin": false, 00:24:43.162 "nvme_io": false 00:24:43.162 }, 00:24:43.162 "memory_domains": [ 00:24:43.162 { 00:24:43.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.162 "dma_device_type": 2 00:24:43.162 } 00:24:43.162 ], 00:24:43.162 "driver_specific": {} 00:24:43.162 } 00:24:43.162 ] 00:24:43.162 13:10:01 -- common/autotest_common.sh@895 -- # return 0 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.162 13:10:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:43.421 13:10:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:43.421 "name": "Existed_Raid", 00:24:43.421 "uuid": "1de47a7a-50cf-4ea5-97be-b08c48c487a6", 00:24:43.421 "strip_size_kb": 64, 00:24:43.421 "state": "online", 00:24:43.421 "raid_level": "raid5f", 00:24:43.421 "superblock": true, 00:24:43.421 "num_base_bdevs": 4, 00:24:43.421 "num_base_bdevs_discovered": 4, 00:24:43.421 "num_base_bdevs_operational": 4, 00:24:43.421 "base_bdevs_list": [ 00:24:43.421 { 00:24:43.421 "name": "BaseBdev1", 00:24:43.421 "uuid": "93ec23b2-f179-4eae-b921-39c51636477e", 00:24:43.421 "is_configured": true, 00:24:43.421 "data_offset": 2048, 00:24:43.421 "data_size": 63488 00:24:43.421 }, 00:24:43.421 { 00:24:43.421 "name": "BaseBdev2", 00:24:43.421 "uuid": "8fbb6338-3e3d-438e-afc8-4491e5b4ea07", 00:24:43.421 "is_configured": true, 00:24:43.422 "data_offset": 2048, 00:24:43.422 "data_size": 63488 00:24:43.422 }, 00:24:43.422 { 00:24:43.422 "name": "BaseBdev3", 00:24:43.422 "uuid": "0446bffc-ce1f-4239-b302-dd24035c93a8", 00:24:43.422 "is_configured": true, 00:24:43.422 "data_offset": 2048, 00:24:43.422 "data_size": 63488 00:24:43.422 }, 00:24:43.422 { 00:24:43.422 "name": "BaseBdev4", 00:24:43.422 "uuid": "6b85b875-3472-4b86-8a68-d037235e8358", 00:24:43.422 "is_configured": true, 00:24:43.422 "data_offset": 2048, 00:24:43.422 "data_size": 63488 00:24:43.422 } 00:24:43.422 ] 00:24:43.422 }' 00:24:43.422 13:10:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:43.422 13:10:02 -- common/autotest_common.sh@10 -- # set +x 00:24:43.988 13:10:02 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:44.245 [2024-06-11 13:10:03.016768] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:44.503 13:10:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:44.503 13:10:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:44.503 13:10:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:44.503 13:10:03 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:44.503 13:10:03 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:44.503 13:10:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:44.503 13:10:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:44.503 13:10:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:44.504 13:10:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:44.504 13:10:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:44.504 13:10:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:44.504 13:10:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:44.504 13:10:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:44.504 13:10:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:44.504 13:10:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:44.504 13:10:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.504 13:10:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:44.761 13:10:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:44.761 "name": "Existed_Raid", 00:24:44.761 "uuid": "1de47a7a-50cf-4ea5-97be-b08c48c487a6", 00:24:44.761 "strip_size_kb": 64, 00:24:44.762 "state": "online", 00:24:44.762 "raid_level": "raid5f", 00:24:44.762 "superblock": true, 00:24:44.762 "num_base_bdevs": 4, 00:24:44.762 "num_base_bdevs_discovered": 3, 00:24:44.762 "num_base_bdevs_operational": 3, 00:24:44.762 "base_bdevs_list": [ 00:24:44.762 { 00:24:44.762 "name": null, 00:24:44.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.762 "is_configured": false, 00:24:44.762 "data_offset": 2048, 00:24:44.762 "data_size": 63488 00:24:44.762 }, 00:24:44.762 { 00:24:44.762 "name": "BaseBdev2", 00:24:44.762 "uuid": "8fbb6338-3e3d-438e-afc8-4491e5b4ea07", 00:24:44.762 "is_configured": true, 00:24:44.762 "data_offset": 2048, 00:24:44.762 "data_size": 63488 00:24:44.762 }, 00:24:44.762 { 00:24:44.762 "name": "BaseBdev3", 00:24:44.762 "uuid": "0446bffc-ce1f-4239-b302-dd24035c93a8", 00:24:44.762 "is_configured": true, 00:24:44.762 "data_offset": 2048, 00:24:44.762 "data_size": 63488 00:24:44.762 }, 00:24:44.762 { 00:24:44.762 "name": "BaseBdev4", 00:24:44.762 "uuid": "6b85b875-3472-4b86-8a68-d037235e8358", 00:24:44.762 "is_configured": true, 00:24:44.762 "data_offset": 2048, 00:24:44.762 "data_size": 63488 00:24:44.762 } 00:24:44.762 ] 00:24:44.762 }' 00:24:44.762 13:10:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:44.762 13:10:03 -- common/autotest_common.sh@10 -- # set +x 00:24:45.402 13:10:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:45.402 13:10:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:45.402 13:10:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.402 13:10:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:45.402 13:10:04 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:24:45.402 13:10:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:45.402 13:10:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:45.674 [2024-06-11 13:10:04.401949] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:45.674 [2024-06-11 13:10:04.402137] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:45.674 [2024-06-11 13:10:04.402310] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:45.674 13:10:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:45.674 13:10:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:45.674 13:10:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.674 13:10:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:45.933 13:10:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:45.933 13:10:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:45.933 13:10:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:46.192 [2024-06-11 13:10:04.926917] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:46.192 13:10:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:46.192 13:10:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:46.192 13:10:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.192 13:10:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:46.450 13:10:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:46.450 13:10:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:46.450 13:10:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:46.707 [2024-06-11 13:10:05.422779] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:46.707 [2024-06-11 13:10:05.423007] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:24:46.707 13:10:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:46.707 13:10:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:46.707 13:10:05 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.707 13:10:05 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:46.964 13:10:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:46.964 13:10:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:46.964 13:10:05 -- bdev/bdev_raid.sh@287 -- # killprocess 133702 00:24:46.964 13:10:05 -- common/autotest_common.sh@926 -- # '[' -z 133702 ']' 00:24:46.964 13:10:05 -- common/autotest_common.sh@930 -- # kill -0 133702 00:24:46.964 13:10:05 -- common/autotest_common.sh@931 -- # uname 00:24:46.964 13:10:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:46.964 13:10:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133702 00:24:46.964 killing process with pid 133702 00:24:46.964 13:10:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:46.964 13:10:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:46.964 13:10:05 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 133702' 00:24:46.964 13:10:05 -- common/autotest_common.sh@945 -- # kill 133702 00:24:46.964 13:10:05 -- common/autotest_common.sh@950 -- # wait 133702 00:24:46.965 [2024-06-11 13:10:05.718817] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:46.965 [2024-06-11 13:10:05.718914] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:47.900 ************************************ 00:24:47.900 END TEST raid5f_state_function_test_sb 00:24:47.900 ************************************ 00:24:47.900 13:10:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:47.900 00:24:47.900 real 0m14.789s 00:24:47.900 user 0m26.567s 00:24:47.900 sys 0m1.658s 00:24:47.900 13:10:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:47.900 13:10:06 -- common/autotest_common.sh@10 -- # set +x 00:24:47.900 13:10:06 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:24:47.900 13:10:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:24:47.900 13:10:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:47.900 13:10:06 -- common/autotest_common.sh@10 -- # set +x 00:24:48.158 ************************************ 00:24:48.158 START TEST raid5f_superblock_test 00:24:48.158 ************************************ 00:24:48.158 13:10:06 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:24:48.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@357 -- # raid_pid=134164 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@358 -- # waitforlisten 134164 /var/tmp/spdk-raid.sock 00:24:48.158 13:10:06 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:48.158 13:10:06 -- common/autotest_common.sh@819 -- # '[' -z 134164 ']' 00:24:48.158 13:10:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:48.158 13:10:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:48.158 13:10:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:24:48.158 13:10:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:48.158 13:10:06 -- common/autotest_common.sh@10 -- # set +x 00:24:48.158 [2024-06-11 13:10:06.808189] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:48.158 [2024-06-11 13:10:06.809172] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134164 ] 00:24:48.158 [2024-06-11 13:10:06.979203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.416 [2024-06-11 13:10:07.202926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.673 [2024-06-11 13:10:07.376831] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:48.931 13:10:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:48.931 13:10:07 -- common/autotest_common.sh@852 -- # return 0 00:24:48.931 13:10:07 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:24:48.931 13:10:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:48.931 13:10:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:24:48.931 13:10:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:24:48.931 13:10:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:48.931 13:10:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:48.931 13:10:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:48.931 13:10:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:48.931 13:10:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:49.189 malloc1 00:24:49.189 13:10:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:49.447 [2024-06-11 13:10:08.098346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:49.447 [2024-06-11 13:10:08.098630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.447 [2024-06-11 13:10:08.098774] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:49.447 [2024-06-11 13:10:08.098912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.447 [2024-06-11 13:10:08.101143] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.447 [2024-06-11 13:10:08.101306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:49.447 pt1 00:24:49.447 13:10:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:49.447 13:10:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:49.447 13:10:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:24:49.447 13:10:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:24:49.447 13:10:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:49.447 13:10:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:49.447 13:10:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:49.447 13:10:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:49.447 13:10:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
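Both the _sb state function test that just finished and the superblock test starting here pass -s to bdev_raid_create. With a superblock, each 65536-block malloc base bdev reserves room for metadata at its start, which is why the RPC output above reports data_offset 2048 and data_size 63488 instead of the 0 and 65536 seen in the non-superblock run, and why the assembled raid5f shrinks from blockcnt 196608 (3 x 65536) to 190464 (3 x 63488); with four members, one member's worth of capacity goes to raid5f parity. This is a reading of the numbers printed in the traces, not something the tests assert explicitly.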
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:49.704 malloc2 00:24:49.704 13:10:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:49.704 [2024-06-11 13:10:08.496989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:49.704 [2024-06-11 13:10:08.497325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.704 [2024-06-11 13:10:08.497400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:49.704 [2024-06-11 13:10:08.497674] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.704 [2024-06-11 13:10:08.500070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.704 [2024-06-11 13:10:08.500236] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:49.704 pt2 00:24:49.704 13:10:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:49.704 13:10:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:49.704 13:10:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:24:49.704 13:10:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:24:49.704 13:10:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:49.704 13:10:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:49.704 13:10:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:49.704 13:10:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:49.704 13:10:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:49.962 malloc3 00:24:49.962 13:10:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:50.220 [2024-06-11 13:10:08.927452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:50.220 [2024-06-11 13:10:08.927709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.220 [2024-06-11 13:10:08.927785] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:50.220 [2024-06-11 13:10:08.928063] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.220 [2024-06-11 13:10:08.930781] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.220 [2024-06-11 13:10:08.930977] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:50.220 pt3 00:24:50.220 13:10:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:50.220 13:10:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:50.220 13:10:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:24:50.220 13:10:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:24:50.220 13:10:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:50.220 13:10:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:50.220 13:10:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:50.220 13:10:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:50.220 13:10:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:50.478 malloc4 00:24:50.478 13:10:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:50.737 [2024-06-11 13:10:09.428642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:50.737 [2024-06-11 13:10:09.429003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.737 [2024-06-11 13:10:09.429086] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:50.737 [2024-06-11 13:10:09.429232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.737 [2024-06-11 13:10:09.431534] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.737 [2024-06-11 13:10:09.431717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:50.737 pt4 00:24:50.737 13:10:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:50.737 13:10:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:50.737 13:10:09 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:50.995 [2024-06-11 13:10:09.624717] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:50.995 [2024-06-11 13:10:09.626744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:50.995 [2024-06-11 13:10:09.626929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:50.995 [2024-06-11 13:10:09.627045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:50.995 [2024-06-11 13:10:09.627339] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:50.995 [2024-06-11 13:10:09.627488] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:50.995 [2024-06-11 13:10:09.627631] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:50.995 [2024-06-11 13:10:09.633184] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:50.995 [2024-06-11 13:10:09.633318] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:24:50.995 [2024-06-11 13:10:09.633632] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.995 13:10:09 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:50.995 13:10:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:50.995 13:10:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:50.995 13:10:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:50.995 13:10:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:50.995 13:10:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:50.995 13:10:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:50.995 13:10:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:50.995 13:10:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:50.995 13:10:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:50.996 13:10:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
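The construction sequence traced above builds four 32 MiB malloc bdevs, wraps each in a passthru bdev with a fixed UUID, and assembles them into a raid5f bdev with a 64 KiB strip size and an on-disk superblock. Consolidated as a sketch (the RPC invocations are the ones shown in the trace; the loop form is only a condensation):

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 1 2 3 4; do
        # 32 MiB malloc bdev with 512-byte blocks, then a passthru bdev on top of it
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc$i
        "$rpc" -s "$sock" bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    # raid5f across the passthru bdevs, 64 KiB strips; -s writes the superblock
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s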
00:24:50.996 13:10:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.254 13:10:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:51.254 "name": "raid_bdev1", 00:24:51.254 "uuid": "ed6a73d9-35e2-48cd-b420-40f3e445d769", 00:24:51.254 "strip_size_kb": 64, 00:24:51.254 "state": "online", 00:24:51.254 "raid_level": "raid5f", 00:24:51.254 "superblock": true, 00:24:51.254 "num_base_bdevs": 4, 00:24:51.254 "num_base_bdevs_discovered": 4, 00:24:51.254 "num_base_bdevs_operational": 4, 00:24:51.254 "base_bdevs_list": [ 00:24:51.254 { 00:24:51.254 "name": "pt1", 00:24:51.254 "uuid": "a96421df-73e5-5d6c-8553-34a17521decb", 00:24:51.254 "is_configured": true, 00:24:51.254 "data_offset": 2048, 00:24:51.254 "data_size": 63488 00:24:51.254 }, 00:24:51.254 { 00:24:51.254 "name": "pt2", 00:24:51.254 "uuid": "d124823a-8195-5a23-8f88-3c9c77bbc808", 00:24:51.254 "is_configured": true, 00:24:51.254 "data_offset": 2048, 00:24:51.254 "data_size": 63488 00:24:51.254 }, 00:24:51.254 { 00:24:51.254 "name": "pt3", 00:24:51.254 "uuid": "c4d0c390-fb03-5648-bb58-facb49dda21f", 00:24:51.254 "is_configured": true, 00:24:51.254 "data_offset": 2048, 00:24:51.254 "data_size": 63488 00:24:51.254 }, 00:24:51.254 { 00:24:51.254 "name": "pt4", 00:24:51.254 "uuid": "679aa4c0-228a-50f9-a023-8926b09e1892", 00:24:51.254 "is_configured": true, 00:24:51.254 "data_offset": 2048, 00:24:51.254 "data_size": 63488 00:24:51.254 } 00:24:51.254 ] 00:24:51.254 }' 00:24:51.254 13:10:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:51.254 13:10:09 -- common/autotest_common.sh@10 -- # set +x 00:24:51.821 13:10:10 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:51.821 13:10:10 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:24:52.080 [2024-06-11 13:10:10.804309] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:52.080 13:10:10 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=ed6a73d9-35e2-48cd-b420-40f3e445d769 00:24:52.080 13:10:10 -- bdev/bdev_raid.sh@380 -- # '[' -z ed6a73d9-35e2-48cd-b420-40f3e445d769 ']' 00:24:52.081 13:10:10 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:52.339 [2024-06-11 13:10:11.052203] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:52.339 [2024-06-11 13:10:11.052352] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:52.339 [2024-06-11 13:10:11.052543] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:52.339 [2024-06-11 13:10:11.052748] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:52.339 [2024-06-11 13:10:11.052887] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:24:52.339 13:10:11 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.339 13:10:11 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:24:52.597 13:10:11 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:24:52.597 13:10:11 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:24:52.597 13:10:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:52.597 13:10:11 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
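The verification step above reads the array back through bdev_raid_get_bdevs and filters it with jq before checking the expected state and member counts, and the teardown removes the raid before the passthru bdevs beneath it. A simplified stand-in for that check (field names and RPCs are taken from the trace; the explicit assertions replace verify_raid_bdev_state):

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r '.state' <<< "$info")" = online ]
    [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 4 ]
    # teardown: delete the raid first, then the passthru bdevs underneath it
    "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
    for i in 1 2 3 4; do "$rpc" -s "$sock" bdev_passthru_delete pt$i; done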
00:24:52.855 13:10:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:52.855 13:10:11 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:52.855 13:10:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:52.855 13:10:11 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:53.114 13:10:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:53.114 13:10:11 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:53.372 13:10:12 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:53.373 13:10:12 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:53.631 13:10:12 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:24:53.631 13:10:12 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:53.631 13:10:12 -- common/autotest_common.sh@640 -- # local es=0 00:24:53.631 13:10:12 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:53.631 13:10:12 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:53.631 13:10:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.631 13:10:12 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:53.631 13:10:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.631 13:10:12 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:53.631 13:10:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.631 13:10:12 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:53.631 13:10:12 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:53.631 13:10:12 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:53.889 [2024-06-11 13:10:12.560464] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:53.889 [2024-06-11 13:10:12.562666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:53.889 [2024-06-11 13:10:12.562876] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:53.889 [2024-06-11 13:10:12.562957] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:53.889 [2024-06-11 13:10:12.563131] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:24:53.889 [2024-06-11 13:10:12.563348] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:24:53.889 [2024-06-11 13:10:12.563488] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:24:53.889 
[2024-06-11 13:10:12.563581] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:24:53.889 [2024-06-11 13:10:12.563656] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:53.889 [2024-06-11 13:10:12.563804] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:24:53.889 request: 00:24:53.889 { 00:24:53.889 "name": "raid_bdev1", 00:24:53.889 "raid_level": "raid5f", 00:24:53.889 "base_bdevs": [ 00:24:53.889 "malloc1", 00:24:53.889 "malloc2", 00:24:53.889 "malloc3", 00:24:53.889 "malloc4" 00:24:53.889 ], 00:24:53.889 "superblock": false, 00:24:53.889 "strip_size_kb": 64, 00:24:53.889 "method": "bdev_raid_create", 00:24:53.889 "req_id": 1 00:24:53.889 } 00:24:53.889 Got JSON-RPC error response 00:24:53.889 response: 00:24:53.889 { 00:24:53.889 "code": -17, 00:24:53.889 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:53.889 } 00:24:53.889 13:10:12 -- common/autotest_common.sh@643 -- # es=1 00:24:53.889 13:10:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:53.889 13:10:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:53.889 13:10:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:53.889 13:10:12 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.889 13:10:12 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:24:54.147 13:10:12 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:24:54.147 13:10:12 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:24:54.147 13:10:12 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:54.405 [2024-06-11 13:10:12.988506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:54.405 [2024-06-11 13:10:12.988882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.405 [2024-06-11 13:10:12.988962] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:54.405 [2024-06-11 13:10:12.989292] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.405 [2024-06-11 13:10:12.991865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.405 [2024-06-11 13:10:12.992077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:54.405 [2024-06-11 13:10:12.992338] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:54.405 [2024-06-11 13:10:12.992538] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:54.405 pt1 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.405 13:10:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.664 13:10:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:54.664 "name": "raid_bdev1", 00:24:54.664 "uuid": "ed6a73d9-35e2-48cd-b420-40f3e445d769", 00:24:54.664 "strip_size_kb": 64, 00:24:54.664 "state": "configuring", 00:24:54.664 "raid_level": "raid5f", 00:24:54.664 "superblock": true, 00:24:54.664 "num_base_bdevs": 4, 00:24:54.664 "num_base_bdevs_discovered": 1, 00:24:54.664 "num_base_bdevs_operational": 4, 00:24:54.664 "base_bdevs_list": [ 00:24:54.664 { 00:24:54.664 "name": "pt1", 00:24:54.664 "uuid": "a96421df-73e5-5d6c-8553-34a17521decb", 00:24:54.664 "is_configured": true, 00:24:54.664 "data_offset": 2048, 00:24:54.664 "data_size": 63488 00:24:54.664 }, 00:24:54.664 { 00:24:54.664 "name": null, 00:24:54.664 "uuid": "d124823a-8195-5a23-8f88-3c9c77bbc808", 00:24:54.664 "is_configured": false, 00:24:54.664 "data_offset": 2048, 00:24:54.664 "data_size": 63488 00:24:54.664 }, 00:24:54.664 { 00:24:54.664 "name": null, 00:24:54.664 "uuid": "c4d0c390-fb03-5648-bb58-facb49dda21f", 00:24:54.664 "is_configured": false, 00:24:54.664 "data_offset": 2048, 00:24:54.664 "data_size": 63488 00:24:54.664 }, 00:24:54.664 { 00:24:54.664 "name": null, 00:24:54.664 "uuid": "679aa4c0-228a-50f9-a023-8926b09e1892", 00:24:54.664 "is_configured": false, 00:24:54.664 "data_offset": 2048, 00:24:54.664 "data_size": 63488 00:24:54.664 } 00:24:54.664 ] 00:24:54.664 }' 00:24:54.664 13:10:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.664 13:10:13 -- common/autotest_common.sh@10 -- # set +x 00:24:55.230 13:10:13 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:24:55.230 13:10:13 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:55.488 [2024-06-11 13:10:14.152994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:55.488 [2024-06-11 13:10:14.153220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.488 [2024-06-11 13:10:14.153297] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:55.488 [2024-06-11 13:10:14.153560] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.488 [2024-06-11 13:10:14.154054] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.488 [2024-06-11 13:10:14.154229] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:55.488 [2024-06-11 13:10:14.154457] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:55.488 [2024-06-11 13:10:14.154584] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:55.488 pt2 00:24:55.488 13:10:14 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:55.746 [2024-06-11 13:10:14.393091] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.746 13:10:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.007 13:10:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:56.007 "name": "raid_bdev1", 00:24:56.007 "uuid": "ed6a73d9-35e2-48cd-b420-40f3e445d769", 00:24:56.007 "strip_size_kb": 64, 00:24:56.007 "state": "configuring", 00:24:56.007 "raid_level": "raid5f", 00:24:56.007 "superblock": true, 00:24:56.007 "num_base_bdevs": 4, 00:24:56.007 "num_base_bdevs_discovered": 1, 00:24:56.007 "num_base_bdevs_operational": 4, 00:24:56.007 "base_bdevs_list": [ 00:24:56.007 { 00:24:56.007 "name": "pt1", 00:24:56.007 "uuid": "a96421df-73e5-5d6c-8553-34a17521decb", 00:24:56.007 "is_configured": true, 00:24:56.007 "data_offset": 2048, 00:24:56.007 "data_size": 63488 00:24:56.007 }, 00:24:56.007 { 00:24:56.007 "name": null, 00:24:56.007 "uuid": "d124823a-8195-5a23-8f88-3c9c77bbc808", 00:24:56.007 "is_configured": false, 00:24:56.007 "data_offset": 2048, 00:24:56.007 "data_size": 63488 00:24:56.007 }, 00:24:56.007 { 00:24:56.007 "name": null, 00:24:56.007 "uuid": "c4d0c390-fb03-5648-bb58-facb49dda21f", 00:24:56.007 "is_configured": false, 00:24:56.007 "data_offset": 2048, 00:24:56.007 "data_size": 63488 00:24:56.007 }, 00:24:56.007 { 00:24:56.007 "name": null, 00:24:56.007 "uuid": "679aa4c0-228a-50f9-a023-8926b09e1892", 00:24:56.007 "is_configured": false, 00:24:56.007 "data_offset": 2048, 00:24:56.007 "data_size": 63488 00:24:56.007 } 00:24:56.007 ] 00:24:56.007 }' 00:24:56.007 13:10:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:56.007 13:10:14 -- common/autotest_common.sh@10 -- # set +x 00:24:56.578 13:10:15 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:24:56.578 13:10:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:56.578 13:10:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:56.836 [2024-06-11 13:10:15.493326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:56.836 [2024-06-11 13:10:15.493576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.836 [2024-06-11 13:10:15.493647] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:56.836 [2024-06-11 13:10:15.493946] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.836 [2024-06-11 13:10:15.494461] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.836 [2024-06-11 13:10:15.494636] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:56.836 [2024-06-11 13:10:15.494841] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:24:56.836 [2024-06-11 13:10:15.494977] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:56.836 pt2 00:24:56.836 13:10:15 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:56.836 13:10:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:56.836 13:10:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:57.094 [2024-06-11 13:10:15.741357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:57.094 [2024-06-11 13:10:15.741599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.094 [2024-06-11 13:10:15.741661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:57.094 [2024-06-11 13:10:15.741778] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.094 [2024-06-11 13:10:15.742212] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.094 [2024-06-11 13:10:15.742393] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:57.094 [2024-06-11 13:10:15.742574] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:57.094 [2024-06-11 13:10:15.742693] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:57.094 pt3 00:24:57.094 13:10:15 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:57.094 13:10:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:57.094 13:10:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:57.353 [2024-06-11 13:10:15.993468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:57.353 [2024-06-11 13:10:15.993737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.353 [2024-06-11 13:10:15.993886] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:57.353 [2024-06-11 13:10:15.994009] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.353 [2024-06-11 13:10:15.994603] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.353 [2024-06-11 13:10:15.994794] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:57.353 [2024-06-11 13:10:15.995004] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:57.353 [2024-06-11 13:10:15.995130] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:57.353 [2024-06-11 13:10:15.995393] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:24:57.353 [2024-06-11 13:10:15.995514] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:57.353 [2024-06-11 13:10:15.995661] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:57.353 [2024-06-11 13:10:16.001663] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:24:57.353 [2024-06-11 13:10:16.001815] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:24:57.353 [2024-06-11 13:10:16.002105] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:24:57.353 pt4 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.353 13:10:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.611 13:10:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.611 "name": "raid_bdev1", 00:24:57.611 "uuid": "ed6a73d9-35e2-48cd-b420-40f3e445d769", 00:24:57.611 "strip_size_kb": 64, 00:24:57.611 "state": "online", 00:24:57.611 "raid_level": "raid5f", 00:24:57.611 "superblock": true, 00:24:57.611 "num_base_bdevs": 4, 00:24:57.611 "num_base_bdevs_discovered": 4, 00:24:57.611 "num_base_bdevs_operational": 4, 00:24:57.611 "base_bdevs_list": [ 00:24:57.611 { 00:24:57.611 "name": "pt1", 00:24:57.611 "uuid": "a96421df-73e5-5d6c-8553-34a17521decb", 00:24:57.611 "is_configured": true, 00:24:57.611 "data_offset": 2048, 00:24:57.611 "data_size": 63488 00:24:57.611 }, 00:24:57.611 { 00:24:57.611 "name": "pt2", 00:24:57.611 "uuid": "d124823a-8195-5a23-8f88-3c9c77bbc808", 00:24:57.611 "is_configured": true, 00:24:57.611 "data_offset": 2048, 00:24:57.611 "data_size": 63488 00:24:57.611 }, 00:24:57.611 { 00:24:57.611 "name": "pt3", 00:24:57.611 "uuid": "c4d0c390-fb03-5648-bb58-facb49dda21f", 00:24:57.611 "is_configured": true, 00:24:57.611 "data_offset": 2048, 00:24:57.611 "data_size": 63488 00:24:57.611 }, 00:24:57.611 { 00:24:57.611 "name": "pt4", 00:24:57.611 "uuid": "679aa4c0-228a-50f9-a023-8926b09e1892", 00:24:57.611 "is_configured": true, 00:24:57.611 "data_offset": 2048, 00:24:57.611 "data_size": 63488 00:24:57.611 } 00:24:57.611 ] 00:24:57.611 }' 00:24:57.611 13:10:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.611 13:10:16 -- common/autotest_common.sh@10 -- # set +x 00:24:58.178 13:10:16 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:58.178 13:10:16 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:24:58.436 [2024-06-11 13:10:17.057586] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:58.436 13:10:17 -- bdev/bdev_raid.sh@430 -- # '[' ed6a73d9-35e2-48cd-b420-40f3e445d769 '!=' ed6a73d9-35e2-48cd-b420-40f3e445d769 ']' 00:24:58.436 13:10:17 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:24:58.436 13:10:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:58.436 13:10:17 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:58.436 13:10:17 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:24:58.695 [2024-06-11 13:10:17.309557] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:58.695 "name": "raid_bdev1", 00:24:58.695 "uuid": "ed6a73d9-35e2-48cd-b420-40f3e445d769", 00:24:58.695 "strip_size_kb": 64, 00:24:58.695 "state": "online", 00:24:58.695 "raid_level": "raid5f", 00:24:58.695 "superblock": true, 00:24:58.695 "num_base_bdevs": 4, 00:24:58.695 "num_base_bdevs_discovered": 3, 00:24:58.695 "num_base_bdevs_operational": 3, 00:24:58.695 "base_bdevs_list": [ 00:24:58.695 { 00:24:58.695 "name": null, 00:24:58.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.695 "is_configured": false, 00:24:58.695 "data_offset": 2048, 00:24:58.695 "data_size": 63488 00:24:58.695 }, 00:24:58.695 { 00:24:58.695 "name": "pt2", 00:24:58.695 "uuid": "d124823a-8195-5a23-8f88-3c9c77bbc808", 00:24:58.695 "is_configured": true, 00:24:58.695 "data_offset": 2048, 00:24:58.695 "data_size": 63488 00:24:58.695 }, 00:24:58.695 { 00:24:58.695 "name": "pt3", 00:24:58.695 "uuid": "c4d0c390-fb03-5648-bb58-facb49dda21f", 00:24:58.695 "is_configured": true, 00:24:58.695 "data_offset": 2048, 00:24:58.695 "data_size": 63488 00:24:58.695 }, 00:24:58.695 { 00:24:58.695 "name": "pt4", 00:24:58.695 "uuid": "679aa4c0-228a-50f9-a023-8926b09e1892", 00:24:58.695 "is_configured": true, 00:24:58.695 "data_offset": 2048, 00:24:58.695 "data_size": 63488 00:24:58.695 } 00:24:58.695 ] 00:24:58.695 }' 00:24:58.695 13:10:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:58.695 13:10:17 -- common/autotest_common.sh@10 -- # set +x 00:24:59.630 13:10:18 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:59.630 [2024-06-11 13:10:18.337719] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:59.630 [2024-06-11 13:10:18.337902] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:59.630 [2024-06-11 13:10:18.338070] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:59.630 [2024-06-11 13:10:18.338284] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:59.630 [2024-06-11 13:10:18.338393] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:24:59.630 13:10:18 -- bdev/bdev_raid.sh@443 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.630 13:10:18 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:24:59.888 13:10:18 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:24:59.888 13:10:18 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:24:59.888 13:10:18 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:24:59.888 13:10:18 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:59.888 13:10:18 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:00.147 13:10:18 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:00.147 13:10:18 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:00.147 13:10:18 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:00.147 13:10:18 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:00.147 13:10:18 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:00.147 13:10:18 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:00.405 13:10:19 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:00.405 13:10:19 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:00.405 13:10:19 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:25:00.405 13:10:19 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:00.405 13:10:19 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:00.663 [2024-06-11 13:10:19.333879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:00.663 [2024-06-11 13:10:19.334181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.663 [2024-06-11 13:10:19.334250] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:00.663 [2024-06-11 13:10:19.334378] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.663 [2024-06-11 13:10:19.337007] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.663 [2024-06-11 13:10:19.337202] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:00.663 [2024-06-11 13:10:19.337414] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:00.663 [2024-06-11 13:10:19.337592] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:00.663 pt2 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.663 13:10:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.922 13:10:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:00.922 "name": "raid_bdev1", 00:25:00.922 "uuid": "ed6a73d9-35e2-48cd-b420-40f3e445d769", 00:25:00.922 "strip_size_kb": 64, 00:25:00.922 "state": "configuring", 00:25:00.922 "raid_level": "raid5f", 00:25:00.922 "superblock": true, 00:25:00.922 "num_base_bdevs": 4, 00:25:00.922 "num_base_bdevs_discovered": 1, 00:25:00.922 "num_base_bdevs_operational": 3, 00:25:00.922 "base_bdevs_list": [ 00:25:00.922 { 00:25:00.922 "name": null, 00:25:00.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.922 "is_configured": false, 00:25:00.922 "data_offset": 2048, 00:25:00.922 "data_size": 63488 00:25:00.922 }, 00:25:00.922 { 00:25:00.922 "name": "pt2", 00:25:00.922 "uuid": "d124823a-8195-5a23-8f88-3c9c77bbc808", 00:25:00.922 "is_configured": true, 00:25:00.922 "data_offset": 2048, 00:25:00.922 "data_size": 63488 00:25:00.922 }, 00:25:00.922 { 00:25:00.922 "name": null, 00:25:00.922 "uuid": "c4d0c390-fb03-5648-bb58-facb49dda21f", 00:25:00.922 "is_configured": false, 00:25:00.922 "data_offset": 2048, 00:25:00.922 "data_size": 63488 00:25:00.922 }, 00:25:00.922 { 00:25:00.922 "name": null, 00:25:00.922 "uuid": "679aa4c0-228a-50f9-a023-8926b09e1892", 00:25:00.922 "is_configured": false, 00:25:00.922 "data_offset": 2048, 00:25:00.922 "data_size": 63488 00:25:00.922 } 00:25:00.922 ] 00:25:00.922 }' 00:25:00.922 13:10:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:00.922 13:10:19 -- common/autotest_common.sh@10 -- # set +x 00:25:01.488 13:10:20 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:01.488 13:10:20 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:01.488 13:10:20 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:01.747 [2024-06-11 13:10:20.422164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:01.747 [2024-06-11 13:10:20.422463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.747 [2024-06-11 13:10:20.422543] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:25:01.747 [2024-06-11 13:10:20.422830] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.747 [2024-06-11 13:10:20.423395] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.747 [2024-06-11 13:10:20.423588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:01.747 [2024-06-11 13:10:20.423806] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:01.747 [2024-06-11 13:10:20.423939] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:01.747 pt3 00:25:01.747 13:10:20 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:01.747 13:10:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:01.747 13:10:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:01.747 13:10:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:01.747 13:10:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:01.747 13:10:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:01.747 13:10:20 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:01.747 13:10:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:01.747 13:10:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:01.747 13:10:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:01.747 13:10:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.747 13:10:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.004 13:10:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:02.004 "name": "raid_bdev1", 00:25:02.004 "uuid": "ed6a73d9-35e2-48cd-b420-40f3e445d769", 00:25:02.004 "strip_size_kb": 64, 00:25:02.004 "state": "configuring", 00:25:02.004 "raid_level": "raid5f", 00:25:02.004 "superblock": true, 00:25:02.004 "num_base_bdevs": 4, 00:25:02.004 "num_base_bdevs_discovered": 2, 00:25:02.004 "num_base_bdevs_operational": 3, 00:25:02.004 "base_bdevs_list": [ 00:25:02.004 { 00:25:02.004 "name": null, 00:25:02.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.004 "is_configured": false, 00:25:02.004 "data_offset": 2048, 00:25:02.004 "data_size": 63488 00:25:02.004 }, 00:25:02.004 { 00:25:02.004 "name": "pt2", 00:25:02.004 "uuid": "d124823a-8195-5a23-8f88-3c9c77bbc808", 00:25:02.004 "is_configured": true, 00:25:02.004 "data_offset": 2048, 00:25:02.004 "data_size": 63488 00:25:02.004 }, 00:25:02.004 { 00:25:02.004 "name": "pt3", 00:25:02.004 "uuid": "c4d0c390-fb03-5648-bb58-facb49dda21f", 00:25:02.004 "is_configured": true, 00:25:02.004 "data_offset": 2048, 00:25:02.004 "data_size": 63488 00:25:02.004 }, 00:25:02.004 { 00:25:02.004 "name": null, 00:25:02.004 "uuid": "679aa4c0-228a-50f9-a023-8926b09e1892", 00:25:02.004 "is_configured": false, 00:25:02.004 "data_offset": 2048, 00:25:02.004 "data_size": 63488 00:25:02.004 } 00:25:02.004 ] 00:25:02.004 }' 00:25:02.004 13:10:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:02.004 13:10:20 -- common/autotest_common.sh@10 -- # set +x 00:25:02.570 13:10:21 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:02.570 13:10:21 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:02.570 13:10:21 -- bdev/bdev_raid.sh@462 -- # i=3 00:25:02.570 13:10:21 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:02.828 [2024-06-11 13:10:21.530497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:02.828 [2024-06-11 13:10:21.530732] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.828 [2024-06-11 13:10:21.530877] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:25:02.828 [2024-06-11 13:10:21.530990] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.828 [2024-06-11 13:10:21.531610] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.828 [2024-06-11 13:10:21.531761] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:02.828 [2024-06-11 13:10:21.531967] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:02.828 [2024-06-11 13:10:21.532097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:02.828 [2024-06-11 13:10:21.532345] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:25:02.828 
[2024-06-11 13:10:21.532457] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:02.828 [2024-06-11 13:10:21.532768] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:02.828 [2024-06-11 13:10:21.538600] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:25:02.828 [2024-06-11 13:10:21.538747] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:25:02.828 [2024-06-11 13:10:21.539094] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:02.828 pt4 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.828 13:10:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.086 13:10:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:03.086 "name": "raid_bdev1", 00:25:03.086 "uuid": "ed6a73d9-35e2-48cd-b420-40f3e445d769", 00:25:03.086 "strip_size_kb": 64, 00:25:03.086 "state": "online", 00:25:03.086 "raid_level": "raid5f", 00:25:03.086 "superblock": true, 00:25:03.086 "num_base_bdevs": 4, 00:25:03.086 "num_base_bdevs_discovered": 3, 00:25:03.086 "num_base_bdevs_operational": 3, 00:25:03.086 "base_bdevs_list": [ 00:25:03.086 { 00:25:03.086 "name": null, 00:25:03.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.086 "is_configured": false, 00:25:03.086 "data_offset": 2048, 00:25:03.086 "data_size": 63488 00:25:03.086 }, 00:25:03.086 { 00:25:03.086 "name": "pt2", 00:25:03.086 "uuid": "d124823a-8195-5a23-8f88-3c9c77bbc808", 00:25:03.086 "is_configured": true, 00:25:03.086 "data_offset": 2048, 00:25:03.086 "data_size": 63488 00:25:03.086 }, 00:25:03.086 { 00:25:03.086 "name": "pt3", 00:25:03.086 "uuid": "c4d0c390-fb03-5648-bb58-facb49dda21f", 00:25:03.086 "is_configured": true, 00:25:03.086 "data_offset": 2048, 00:25:03.086 "data_size": 63488 00:25:03.086 }, 00:25:03.086 { 00:25:03.086 "name": "pt4", 00:25:03.086 "uuid": "679aa4c0-228a-50f9-a023-8926b09e1892", 00:25:03.086 "is_configured": true, 00:25:03.086 "data_offset": 2048, 00:25:03.086 "data_size": 63488 00:25:03.086 } 00:25:03.086 ] 00:25:03.086 }' 00:25:03.086 13:10:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:03.086 13:10:21 -- common/autotest_common.sh@10 -- # set +x 00:25:03.652 13:10:22 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:25:03.652 13:10:22 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:03.911 [2024-06-11 13:10:22.577873] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:03.911 
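The re-assembly pass traced above relies on the on-disk superblock: once the raid and its passthru bdevs are torn down, re-registering the passthru bdevs is enough for the examine path to reclaim them, and the array climbs from "configuring" back to "online" as soon as three of the four members are present. Sketched below (RPC and bdev names are from the trace; the final jq check is a stand-in for verify_raid_bdev_state):

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 2 3 4; do
        # re-creating the passthru bdev exposes the raid5f superblock it still carries
        "$rpc" -s "$sock" bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    # with three members back, raid_bdev1 should again be reported as online
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | \
        jq -e '.[] | select(.name == "raid_bdev1") | .state == "online"'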
[2024-06-11 13:10:22.578058] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:03.911 [2024-06-11 13:10:22.578246] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:03.911 [2024-06-11 13:10:22.578413] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:03.911 [2024-06-11 13:10:22.578509] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:25:03.911 13:10:22 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.911 13:10:22 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:25:04.169 13:10:22 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:25:04.169 13:10:22 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:25:04.169 13:10:22 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:04.427 [2024-06-11 13:10:23.026005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:04.427 [2024-06-11 13:10:23.026292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.427 [2024-06-11 13:10:23.026368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:25:04.427 [2024-06-11 13:10:23.026623] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.427 [2024-06-11 13:10:23.028885] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.427 [2024-06-11 13:10:23.029097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:04.427 [2024-06-11 13:10:23.029328] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:04.427 [2024-06-11 13:10:23.029488] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:04.427 pt1 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:04.427 "name": "raid_bdev1", 00:25:04.427 "uuid": "ed6a73d9-35e2-48cd-b420-40f3e445d769", 00:25:04.427 "strip_size_kb": 64, 00:25:04.427 "state": "configuring", 00:25:04.427 "raid_level": "raid5f", 00:25:04.427 "superblock": true, 00:25:04.427 "num_base_bdevs": 4, 00:25:04.427 "num_base_bdevs_discovered": 1, 00:25:04.427 
"num_base_bdevs_operational": 4, 00:25:04.427 "base_bdevs_list": [ 00:25:04.427 { 00:25:04.427 "name": "pt1", 00:25:04.427 "uuid": "a96421df-73e5-5d6c-8553-34a17521decb", 00:25:04.427 "is_configured": true, 00:25:04.427 "data_offset": 2048, 00:25:04.427 "data_size": 63488 00:25:04.427 }, 00:25:04.427 { 00:25:04.427 "name": null, 00:25:04.427 "uuid": "d124823a-8195-5a23-8f88-3c9c77bbc808", 00:25:04.427 "is_configured": false, 00:25:04.427 "data_offset": 2048, 00:25:04.427 "data_size": 63488 00:25:04.427 }, 00:25:04.427 { 00:25:04.427 "name": null, 00:25:04.427 "uuid": "c4d0c390-fb03-5648-bb58-facb49dda21f", 00:25:04.427 "is_configured": false, 00:25:04.427 "data_offset": 2048, 00:25:04.427 "data_size": 63488 00:25:04.427 }, 00:25:04.427 { 00:25:04.427 "name": null, 00:25:04.427 "uuid": "679aa4c0-228a-50f9-a023-8926b09e1892", 00:25:04.427 "is_configured": false, 00:25:04.427 "data_offset": 2048, 00:25:04.427 "data_size": 63488 00:25:04.427 } 00:25:04.427 ] 00:25:04.427 }' 00:25:04.427 13:10:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:04.427 13:10:23 -- common/autotest_common.sh@10 -- # set +x 00:25:05.361 13:10:23 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:25:05.361 13:10:23 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:05.361 13:10:23 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:05.361 13:10:24 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:05.361 13:10:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:05.361 13:10:24 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:05.619 13:10:24 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:05.619 13:10:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:05.619 13:10:24 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:05.878 13:10:24 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:05.878 13:10:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:05.878 13:10:24 -- bdev/bdev_raid.sh@489 -- # i=3 00:25:05.878 13:10:24 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:06.136 [2024-06-11 13:10:24.750346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:06.136 [2024-06-11 13:10:24.750562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.136 [2024-06-11 13:10:24.750736] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:25:06.136 [2024-06-11 13:10:24.750862] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.136 [2024-06-11 13:10:24.751384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.136 [2024-06-11 13:10:24.751534] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:06.136 [2024-06-11 13:10:24.751716] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:06.136 [2024-06-11 13:10:24.751836] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:06.136 [2024-06-11 13:10:24.751924] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:06.136 [2024-06-11 
13:10:24.751973] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:25:06.136 [2024-06-11 13:10:24.752120] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:06.136 pt4 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.136 13:10:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.395 13:10:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:06.395 "name": "raid_bdev1", 00:25:06.395 "uuid": "ed6a73d9-35e2-48cd-b420-40f3e445d769", 00:25:06.395 "strip_size_kb": 64, 00:25:06.395 "state": "configuring", 00:25:06.395 "raid_level": "raid5f", 00:25:06.395 "superblock": true, 00:25:06.395 "num_base_bdevs": 4, 00:25:06.395 "num_base_bdevs_discovered": 1, 00:25:06.395 "num_base_bdevs_operational": 3, 00:25:06.395 "base_bdevs_list": [ 00:25:06.395 { 00:25:06.395 "name": null, 00:25:06.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.395 "is_configured": false, 00:25:06.395 "data_offset": 2048, 00:25:06.395 "data_size": 63488 00:25:06.395 }, 00:25:06.395 { 00:25:06.395 "name": null, 00:25:06.395 "uuid": "d124823a-8195-5a23-8f88-3c9c77bbc808", 00:25:06.395 "is_configured": false, 00:25:06.395 "data_offset": 2048, 00:25:06.395 "data_size": 63488 00:25:06.395 }, 00:25:06.395 { 00:25:06.395 "name": null, 00:25:06.395 "uuid": "c4d0c390-fb03-5648-bb58-facb49dda21f", 00:25:06.395 "is_configured": false, 00:25:06.395 "data_offset": 2048, 00:25:06.395 "data_size": 63488 00:25:06.395 }, 00:25:06.395 { 00:25:06.395 "name": "pt4", 00:25:06.395 "uuid": "679aa4c0-228a-50f9-a023-8926b09e1892", 00:25:06.395 "is_configured": true, 00:25:06.395 "data_offset": 2048, 00:25:06.395 "data_size": 63488 00:25:06.395 } 00:25:06.395 ] 00:25:06.395 }' 00:25:06.395 13:10:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:06.395 13:10:25 -- common/autotest_common.sh@10 -- # set +x 00:25:06.960 13:10:25 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:25:06.960 13:10:25 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:06.960 13:10:25 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:07.231 [2024-06-11 13:10:25.854617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:07.231 [2024-06-11 13:10:25.854960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.231 [2024-06-11 13:10:25.855118] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 
00:25:07.231 [2024-06-11 13:10:25.855239] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.231 [2024-06-11 13:10:25.855814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.231 [2024-06-11 13:10:25.855987] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:07.231 [2024-06-11 13:10:25.856185] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:07.231 [2024-06-11 13:10:25.856316] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:07.231 pt2 00:25:07.231 13:10:25 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:07.231 13:10:25 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:07.231 13:10:25 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:07.503 [2024-06-11 13:10:26.118674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:07.503 [2024-06-11 13:10:26.118931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.503 [2024-06-11 13:10:26.119074] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:25:07.503 [2024-06-11 13:10:26.119191] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.503 [2024-06-11 13:10:26.119752] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.503 [2024-06-11 13:10:26.119951] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:07.503 [2024-06-11 13:10:26.120169] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:07.503 [2024-06-11 13:10:26.120285] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:07.503 [2024-06-11 13:10:26.120588] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:25:07.503 [2024-06-11 13:10:26.120710] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:07.503 [2024-06-11 13:10:26.120979] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:25:07.503 [2024-06-11 13:10:26.126981] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:25:07.503 [2024-06-11 13:10:26.127130] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:25:07.503 [2024-06-11 13:10:26.127456] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.503 pt3 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.503 13:10:26 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:07.503 "name": "raid_bdev1", 00:25:07.503 "uuid": "ed6a73d9-35e2-48cd-b420-40f3e445d769", 00:25:07.503 "strip_size_kb": 64, 00:25:07.503 "state": "online", 00:25:07.503 "raid_level": "raid5f", 00:25:07.503 "superblock": true, 00:25:07.503 "num_base_bdevs": 4, 00:25:07.503 "num_base_bdevs_discovered": 3, 00:25:07.503 "num_base_bdevs_operational": 3, 00:25:07.503 "base_bdevs_list": [ 00:25:07.503 { 00:25:07.503 "name": null, 00:25:07.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.503 "is_configured": false, 00:25:07.503 "data_offset": 2048, 00:25:07.503 "data_size": 63488 00:25:07.503 }, 00:25:07.503 { 00:25:07.503 "name": "pt2", 00:25:07.503 "uuid": "d124823a-8195-5a23-8f88-3c9c77bbc808", 00:25:07.503 "is_configured": true, 00:25:07.503 "data_offset": 2048, 00:25:07.503 "data_size": 63488 00:25:07.503 }, 00:25:07.503 { 00:25:07.503 "name": "pt3", 00:25:07.503 "uuid": "c4d0c390-fb03-5648-bb58-facb49dda21f", 00:25:07.503 "is_configured": true, 00:25:07.503 "data_offset": 2048, 00:25:07.503 "data_size": 63488 00:25:07.503 }, 00:25:07.503 { 00:25:07.503 "name": "pt4", 00:25:07.503 "uuid": "679aa4c0-228a-50f9-a023-8926b09e1892", 00:25:07.503 "is_configured": true, 00:25:07.503 "data_offset": 2048, 00:25:07.503 "data_size": 63488 00:25:07.503 } 00:25:07.503 ] 00:25:07.503 }' 00:25:07.503 13:10:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:07.503 13:10:26 -- common/autotest_common.sh@10 -- # set +x 00:25:08.436 13:10:26 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:08.436 13:10:26 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:25:08.436 [2024-06-11 13:10:27.166694] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:08.436 13:10:27 -- bdev/bdev_raid.sh@506 -- # '[' ed6a73d9-35e2-48cd-b420-40f3e445d769 '!=' ed6a73d9-35e2-48cd-b420-40f3e445d769 ']' 00:25:08.436 13:10:27 -- bdev/bdev_raid.sh@511 -- # killprocess 134164 00:25:08.436 13:10:27 -- common/autotest_common.sh@926 -- # '[' -z 134164 ']' 00:25:08.436 13:10:27 -- common/autotest_common.sh@930 -- # kill -0 134164 00:25:08.436 13:10:27 -- common/autotest_common.sh@931 -- # uname 00:25:08.436 13:10:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:08.436 13:10:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134164 00:25:08.436 13:10:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:08.436 13:10:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:08.436 13:10:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134164' 00:25:08.436 killing process with pid 134164 00:25:08.436 13:10:27 -- common/autotest_common.sh@945 -- # kill 134164 00:25:08.436 13:10:27 -- common/autotest_common.sh@950 -- # wait 134164 00:25:08.436 [2024-06-11 13:10:27.217222] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:08.436 [2024-06-11 13:10:27.217304] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.436 [2024-06-11 13:10:27.217394] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:08.436 [2024-06-11 13:10:27.217472] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:25:08.693 [2024-06-11 13:10:27.510699] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:09.630 ************************************ 00:25:09.630 END TEST raid5f_superblock_test 00:25:09.630 ************************************ 00:25:09.630 13:10:28 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:09.630 00:25:09.630 real 0m21.699s 00:25:09.630 user 0m40.235s 00:25:09.630 sys 0m2.358s 00:25:09.630 13:10:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:09.630 13:10:28 -- common/autotest_common.sh@10 -- # set +x 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:25:09.889 13:10:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:09.889 13:10:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:09.889 13:10:28 -- common/autotest_common.sh@10 -- # set +x 00:25:09.889 ************************************ 00:25:09.889 START TEST raid5f_rebuild_test 00:25:09.889 ************************************ 00:25:09.889 13:10:28 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:09.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@544 -- # raid_pid=134869 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134869 /var/tmp/spdk-raid.sock 00:25:09.889 13:10:28 -- common/autotest_common.sh@819 -- # '[' -z 134869 ']' 00:25:09.889 13:10:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:09.889 13:10:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:09.889 13:10:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:09.889 13:10:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:09.889 13:10:28 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:09.889 13:10:28 -- common/autotest_common.sh@10 -- # set +x 00:25:09.889 [2024-06-11 13:10:28.578078] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:09.889 [2024-06-11 13:10:28.578536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134869 ] 00:25:09.889 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:09.889 Zero copy mechanism will not be used. 
00:25:10.148 [2024-06-11 13:10:28.753662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.148 [2024-06-11 13:10:28.979123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.406 [2024-06-11 13:10:29.152823] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:10.664 13:10:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:10.664 13:10:29 -- common/autotest_common.sh@852 -- # return 0 00:25:10.664 13:10:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:10.664 13:10:29 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:10.664 13:10:29 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:10.922 BaseBdev1 00:25:10.922 13:10:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:10.922 13:10:29 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:10.923 13:10:29 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:11.181 BaseBdev2 00:25:11.181 13:10:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:11.181 13:10:29 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:11.181 13:10:29 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:11.438 BaseBdev3 00:25:11.438 13:10:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:11.438 13:10:30 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:11.438 13:10:30 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:11.696 BaseBdev4 00:25:11.696 13:10:30 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:11.954 spare_malloc 00:25:11.954 13:10:30 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:12.212 spare_delay 00:25:12.212 13:10:30 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:12.469 [2024-06-11 13:10:31.074449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:12.469 [2024-06-11 13:10:31.074677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.469 [2024-06-11 13:10:31.074810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:12.469 [2024-06-11 13:10:31.074938] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.469 [2024-06-11 13:10:31.077190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.469 [2024-06-11 13:10:31.077347] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:12.469 spare 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:12.469 [2024-06-11 13:10:31.262511] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:12.469 [2024-06-11 13:10:31.264452] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:12.469 [2024-06-11 13:10:31.264654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:12.469 [2024-06-11 13:10:31.264729] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:12.469 [2024-06-11 13:10:31.264921] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:25:12.469 [2024-06-11 13:10:31.265040] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:12.469 [2024-06-11 13:10:31.265207] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:25:12.469 [2024-06-11 13:10:31.271178] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:25:12.469 [2024-06-11 13:10:31.271307] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:25:12.469 [2024-06-11 13:10:31.271600] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.469 13:10:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.726 13:10:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:12.726 "name": "raid_bdev1", 00:25:12.726 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:12.726 "strip_size_kb": 64, 00:25:12.726 "state": "online", 00:25:12.726 "raid_level": "raid5f", 00:25:12.726 "superblock": false, 00:25:12.726 "num_base_bdevs": 4, 00:25:12.726 "num_base_bdevs_discovered": 4, 00:25:12.726 "num_base_bdevs_operational": 4, 00:25:12.726 "base_bdevs_list": [ 00:25:12.726 { 00:25:12.726 "name": "BaseBdev1", 00:25:12.726 "uuid": "e3de6944-f1e9-41ad-9c34-ca1d6557e44a", 00:25:12.726 "is_configured": true, 00:25:12.726 "data_offset": 0, 00:25:12.726 "data_size": 65536 00:25:12.726 }, 00:25:12.726 { 00:25:12.726 "name": "BaseBdev2", 00:25:12.726 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:12.726 "is_configured": true, 00:25:12.726 "data_offset": 0, 00:25:12.726 "data_size": 65536 00:25:12.726 }, 00:25:12.726 { 00:25:12.726 "name": "BaseBdev3", 00:25:12.726 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:12.726 "is_configured": true, 00:25:12.726 "data_offset": 0, 00:25:12.726 "data_size": 65536 00:25:12.726 }, 00:25:12.726 { 00:25:12.726 "name": "BaseBdev4", 00:25:12.726 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:12.726 "is_configured": true, 00:25:12.726 "data_offset": 0, 00:25:12.726 "data_size": 65536 00:25:12.726 } 00:25:12.726 ] 00:25:12.726 }' 00:25:12.726 
13:10:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:12.726 13:10:31 -- common/autotest_common.sh@10 -- # set +x 00:25:13.658 13:10:32 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:13.658 13:10:32 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:13.658 [2024-06-11 13:10:32.382537] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:13.658 13:10:32 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:25:13.658 13:10:32 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.658 13:10:32 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:13.916 13:10:32 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:13.916 13:10:32 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:13.916 13:10:32 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:13.916 13:10:32 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:13.916 13:10:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:13.916 13:10:32 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:13.916 13:10:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:13.916 13:10:32 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:13.916 13:10:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:13.916 13:10:32 -- bdev/nbd_common.sh@12 -- # local i 00:25:13.916 13:10:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:13.916 13:10:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:13.916 13:10:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:14.175 [2024-06-11 13:10:32.902309] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:14.175 /dev/nbd0 00:25:14.175 13:10:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:14.175 13:10:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:14.175 13:10:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:14.175 13:10:32 -- common/autotest_common.sh@857 -- # local i 00:25:14.175 13:10:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:14.175 13:10:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:14.175 13:10:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:14.175 13:10:32 -- common/autotest_common.sh@861 -- # break 00:25:14.175 13:10:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:14.175 13:10:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:14.175 13:10:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:14.175 1+0 records in 00:25:14.175 1+0 records out 00:25:14.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495301 s, 8.3 MB/s 00:25:14.175 13:10:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:14.175 13:10:32 -- common/autotest_common.sh@874 -- # size=4096 00:25:14.175 13:10:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:14.175 13:10:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:14.175 13:10:32 -- common/autotest_common.sh@877 -- # return 0 00:25:14.175 13:10:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:14.175 13:10:32 -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:25:14.175 13:10:32 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:14.175 13:10:32 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:14.175 13:10:32 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:14.175 13:10:32 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:25:14.740 512+0 records in 00:25:14.740 512+0 records out 00:25:14.740 100663296 bytes (101 MB, 96 MiB) copied, 0.488893 s, 206 MB/s 00:25:14.740 13:10:33 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:14.740 13:10:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:14.741 13:10:33 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:14.741 13:10:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:14.741 13:10:33 -- bdev/nbd_common.sh@51 -- # local i 00:25:14.741 13:10:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:14.741 13:10:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:14.998 [2024-06-11 13:10:33.713331] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@41 -- # break 00:25:14.998 13:10:33 -- bdev/nbd_common.sh@45 -- # return 0 00:25:14.998 13:10:33 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:15.257 [2024-06-11 13:10:33.993156] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.257 13:10:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.515 13:10:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:15.515 "name": "raid_bdev1", 00:25:15.515 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:15.515 "strip_size_kb": 64, 00:25:15.515 "state": 
"online", 00:25:15.515 "raid_level": "raid5f", 00:25:15.515 "superblock": false, 00:25:15.515 "num_base_bdevs": 4, 00:25:15.515 "num_base_bdevs_discovered": 3, 00:25:15.515 "num_base_bdevs_operational": 3, 00:25:15.515 "base_bdevs_list": [ 00:25:15.515 { 00:25:15.515 "name": null, 00:25:15.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.515 "is_configured": false, 00:25:15.515 "data_offset": 0, 00:25:15.515 "data_size": 65536 00:25:15.515 }, 00:25:15.515 { 00:25:15.515 "name": "BaseBdev2", 00:25:15.515 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:15.515 "is_configured": true, 00:25:15.515 "data_offset": 0, 00:25:15.515 "data_size": 65536 00:25:15.515 }, 00:25:15.515 { 00:25:15.515 "name": "BaseBdev3", 00:25:15.515 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:15.515 "is_configured": true, 00:25:15.515 "data_offset": 0, 00:25:15.515 "data_size": 65536 00:25:15.515 }, 00:25:15.515 { 00:25:15.515 "name": "BaseBdev4", 00:25:15.515 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:15.515 "is_configured": true, 00:25:15.515 "data_offset": 0, 00:25:15.515 "data_size": 65536 00:25:15.515 } 00:25:15.515 ] 00:25:15.515 }' 00:25:15.515 13:10:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:15.515 13:10:34 -- common/autotest_common.sh@10 -- # set +x 00:25:16.080 13:10:34 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:16.338 [2024-06-11 13:10:35.149386] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:16.338 [2024-06-11 13:10:35.149652] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:16.338 [2024-06-11 13:10:35.160978] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d220 00:25:16.338 13:10:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:16.338 [2024-06-11 13:10:35.178490] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:17.713 "name": "raid_bdev1", 00:25:17.713 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:17.713 "strip_size_kb": 64, 00:25:17.713 "state": "online", 00:25:17.713 "raid_level": "raid5f", 00:25:17.713 "superblock": false, 00:25:17.713 "num_base_bdevs": 4, 00:25:17.713 "num_base_bdevs_discovered": 4, 00:25:17.713 "num_base_bdevs_operational": 4, 00:25:17.713 "process": { 00:25:17.713 "type": "rebuild", 00:25:17.713 "target": "spare", 00:25:17.713 "progress": { 00:25:17.713 "blocks": 23040, 00:25:17.713 "percent": 11 00:25:17.713 } 00:25:17.713 }, 00:25:17.713 "base_bdevs_list": [ 00:25:17.713 { 00:25:17.713 "name": "spare", 00:25:17.713 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:17.713 "is_configured": true, 00:25:17.713 "data_offset": 0, 00:25:17.713 "data_size": 65536 
00:25:17.713 }, 00:25:17.713 { 00:25:17.713 "name": "BaseBdev2", 00:25:17.713 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:17.713 "is_configured": true, 00:25:17.713 "data_offset": 0, 00:25:17.713 "data_size": 65536 00:25:17.713 }, 00:25:17.713 { 00:25:17.713 "name": "BaseBdev3", 00:25:17.713 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:17.713 "is_configured": true, 00:25:17.713 "data_offset": 0, 00:25:17.713 "data_size": 65536 00:25:17.713 }, 00:25:17.713 { 00:25:17.713 "name": "BaseBdev4", 00:25:17.713 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:17.713 "is_configured": true, 00:25:17.713 "data_offset": 0, 00:25:17.713 "data_size": 65536 00:25:17.713 } 00:25:17.713 ] 00:25:17.713 }' 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:17.713 13:10:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:17.972 [2024-06-11 13:10:36.751961] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:17.972 [2024-06-11 13:10:36.790088] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:17.972 [2024-06-11 13:10:36.790361] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.240 13:10:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.516 13:10:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:18.516 "name": "raid_bdev1", 00:25:18.516 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:18.516 "strip_size_kb": 64, 00:25:18.516 "state": "online", 00:25:18.516 "raid_level": "raid5f", 00:25:18.516 "superblock": false, 00:25:18.516 "num_base_bdevs": 4, 00:25:18.516 "num_base_bdevs_discovered": 3, 00:25:18.516 "num_base_bdevs_operational": 3, 00:25:18.516 "base_bdevs_list": [ 00:25:18.516 { 00:25:18.516 "name": null, 00:25:18.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.516 "is_configured": false, 00:25:18.516 "data_offset": 0, 00:25:18.516 "data_size": 65536 00:25:18.516 }, 00:25:18.516 { 00:25:18.516 "name": "BaseBdev2", 00:25:18.516 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:18.516 "is_configured": true, 00:25:18.516 "data_offset": 0, 00:25:18.516 "data_size": 65536 00:25:18.516 }, 00:25:18.516 { 00:25:18.516 
"name": "BaseBdev3", 00:25:18.516 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:18.516 "is_configured": true, 00:25:18.516 "data_offset": 0, 00:25:18.516 "data_size": 65536 00:25:18.516 }, 00:25:18.516 { 00:25:18.516 "name": "BaseBdev4", 00:25:18.516 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:18.516 "is_configured": true, 00:25:18.516 "data_offset": 0, 00:25:18.516 "data_size": 65536 00:25:18.516 } 00:25:18.516 ] 00:25:18.516 }' 00:25:18.516 13:10:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:18.516 13:10:37 -- common/autotest_common.sh@10 -- # set +x 00:25:19.083 13:10:37 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:19.083 13:10:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:19.083 13:10:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:19.083 13:10:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:19.083 13:10:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:19.083 13:10:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.083 13:10:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.341 13:10:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:19.341 "name": "raid_bdev1", 00:25:19.341 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:19.341 "strip_size_kb": 64, 00:25:19.341 "state": "online", 00:25:19.341 "raid_level": "raid5f", 00:25:19.341 "superblock": false, 00:25:19.341 "num_base_bdevs": 4, 00:25:19.341 "num_base_bdevs_discovered": 3, 00:25:19.341 "num_base_bdevs_operational": 3, 00:25:19.341 "base_bdevs_list": [ 00:25:19.341 { 00:25:19.341 "name": null, 00:25:19.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.341 "is_configured": false, 00:25:19.341 "data_offset": 0, 00:25:19.341 "data_size": 65536 00:25:19.341 }, 00:25:19.341 { 00:25:19.341 "name": "BaseBdev2", 00:25:19.341 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:19.341 "is_configured": true, 00:25:19.341 "data_offset": 0, 00:25:19.341 "data_size": 65536 00:25:19.341 }, 00:25:19.341 { 00:25:19.341 "name": "BaseBdev3", 00:25:19.341 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:19.341 "is_configured": true, 00:25:19.341 "data_offset": 0, 00:25:19.341 "data_size": 65536 00:25:19.341 }, 00:25:19.341 { 00:25:19.341 "name": "BaseBdev4", 00:25:19.341 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:19.341 "is_configured": true, 00:25:19.341 "data_offset": 0, 00:25:19.341 "data_size": 65536 00:25:19.341 } 00:25:19.341 ] 00:25:19.341 }' 00:25:19.341 13:10:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:19.341 13:10:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:19.341 13:10:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:19.341 13:10:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:19.341 13:10:38 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:19.601 [2024-06-11 13:10:38.303172] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:19.601 [2024-06-11 13:10:38.303390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:19.601 [2024-06-11 13:10:38.313778] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d3c0 00:25:19.601 [2024-06-11 13:10:38.320908] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:19.601 13:10:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:20.533 13:10:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.533 13:10:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:20.533 13:10:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:20.533 13:10:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:20.533 13:10:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:20.533 13:10:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.533 13:10:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.791 13:10:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:20.791 "name": "raid_bdev1", 00:25:20.791 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:20.791 "strip_size_kb": 64, 00:25:20.791 "state": "online", 00:25:20.791 "raid_level": "raid5f", 00:25:20.791 "superblock": false, 00:25:20.791 "num_base_bdevs": 4, 00:25:20.791 "num_base_bdevs_discovered": 4, 00:25:20.791 "num_base_bdevs_operational": 4, 00:25:20.791 "process": { 00:25:20.791 "type": "rebuild", 00:25:20.791 "target": "spare", 00:25:20.791 "progress": { 00:25:20.791 "blocks": 23040, 00:25:20.791 "percent": 11 00:25:20.791 } 00:25:20.791 }, 00:25:20.791 "base_bdevs_list": [ 00:25:20.791 { 00:25:20.791 "name": "spare", 00:25:20.791 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:20.791 "is_configured": true, 00:25:20.791 "data_offset": 0, 00:25:20.791 "data_size": 65536 00:25:20.791 }, 00:25:20.791 { 00:25:20.791 "name": "BaseBdev2", 00:25:20.791 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:20.791 "is_configured": true, 00:25:20.791 "data_offset": 0, 00:25:20.791 "data_size": 65536 00:25:20.791 }, 00:25:20.791 { 00:25:20.791 "name": "BaseBdev3", 00:25:20.791 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:20.791 "is_configured": true, 00:25:20.791 "data_offset": 0, 00:25:20.791 "data_size": 65536 00:25:20.791 }, 00:25:20.791 { 00:25:20.791 "name": "BaseBdev4", 00:25:20.791 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:20.791 "is_configured": true, 00:25:20.791 "data_offset": 0, 00:25:20.791 "data_size": 65536 00:25:20.791 } 00:25:20.791 ] 00:25:20.791 }' 00:25:20.791 13:10:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:20.791 13:10:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:20.791 13:10:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:21.049 13:10:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@657 -- # local timeout=711 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:21.050 13:10:39 -- 
bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:21.050 "name": "raid_bdev1", 00:25:21.050 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:21.050 "strip_size_kb": 64, 00:25:21.050 "state": "online", 00:25:21.050 "raid_level": "raid5f", 00:25:21.050 "superblock": false, 00:25:21.050 "num_base_bdevs": 4, 00:25:21.050 "num_base_bdevs_discovered": 4, 00:25:21.050 "num_base_bdevs_operational": 4, 00:25:21.050 "process": { 00:25:21.050 "type": "rebuild", 00:25:21.050 "target": "spare", 00:25:21.050 "progress": { 00:25:21.050 "blocks": 28800, 00:25:21.050 "percent": 14 00:25:21.050 } 00:25:21.050 }, 00:25:21.050 "base_bdevs_list": [ 00:25:21.050 { 00:25:21.050 "name": "spare", 00:25:21.050 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:21.050 "is_configured": true, 00:25:21.050 "data_offset": 0, 00:25:21.050 "data_size": 65536 00:25:21.050 }, 00:25:21.050 { 00:25:21.050 "name": "BaseBdev2", 00:25:21.050 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:21.050 "is_configured": true, 00:25:21.050 "data_offset": 0, 00:25:21.050 "data_size": 65536 00:25:21.050 }, 00:25:21.050 { 00:25:21.050 "name": "BaseBdev3", 00:25:21.050 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:21.050 "is_configured": true, 00:25:21.050 "data_offset": 0, 00:25:21.050 "data_size": 65536 00:25:21.050 }, 00:25:21.050 { 00:25:21.050 "name": "BaseBdev4", 00:25:21.050 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:21.050 "is_configured": true, 00:25:21.050 "data_offset": 0, 00:25:21.050 "data_size": 65536 00:25:21.050 } 00:25:21.050 ] 00:25:21.050 }' 00:25:21.050 13:10:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:21.308 13:10:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:21.308 13:10:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:21.308 13:10:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:21.308 13:10:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:22.241 13:10:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:22.241 13:10:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:22.241 13:10:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:22.241 13:10:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:22.241 13:10:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:22.241 13:10:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:22.241 13:10:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.241 13:10:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.499 13:10:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:22.499 "name": "raid_bdev1", 00:25:22.499 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:22.499 "strip_size_kb": 64, 00:25:22.499 "state": "online", 00:25:22.499 "raid_level": "raid5f", 00:25:22.499 "superblock": false, 00:25:22.499 "num_base_bdevs": 4, 00:25:22.499 "num_base_bdevs_discovered": 4, 00:25:22.499 "num_base_bdevs_operational": 4, 00:25:22.499 "process": { 00:25:22.499 "type": "rebuild", 00:25:22.499 "target": "spare", 00:25:22.499 "progress": { 00:25:22.499 "blocks": 53760, 00:25:22.499 "percent": 27 00:25:22.499 } 
00:25:22.499 }, 00:25:22.499 "base_bdevs_list": [ 00:25:22.499 { 00:25:22.499 "name": "spare", 00:25:22.499 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:22.499 "is_configured": true, 00:25:22.499 "data_offset": 0, 00:25:22.499 "data_size": 65536 00:25:22.499 }, 00:25:22.499 { 00:25:22.499 "name": "BaseBdev2", 00:25:22.499 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:22.499 "is_configured": true, 00:25:22.499 "data_offset": 0, 00:25:22.499 "data_size": 65536 00:25:22.499 }, 00:25:22.499 { 00:25:22.499 "name": "BaseBdev3", 00:25:22.499 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:22.499 "is_configured": true, 00:25:22.499 "data_offset": 0, 00:25:22.499 "data_size": 65536 00:25:22.499 }, 00:25:22.499 { 00:25:22.499 "name": "BaseBdev4", 00:25:22.499 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:22.499 "is_configured": true, 00:25:22.499 "data_offset": 0, 00:25:22.499 "data_size": 65536 00:25:22.499 } 00:25:22.499 ] 00:25:22.499 }' 00:25:22.499 13:10:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:22.499 13:10:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:22.499 13:10:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:22.499 13:10:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:22.499 13:10:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:23.874 13:10:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:23.874 13:10:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:23.874 13:10:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:23.874 13:10:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:23.874 13:10:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:23.874 13:10:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:23.874 13:10:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.874 13:10:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.874 13:10:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:23.874 "name": "raid_bdev1", 00:25:23.874 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:23.874 "strip_size_kb": 64, 00:25:23.874 "state": "online", 00:25:23.874 "raid_level": "raid5f", 00:25:23.874 "superblock": false, 00:25:23.874 "num_base_bdevs": 4, 00:25:23.874 "num_base_bdevs_discovered": 4, 00:25:23.874 "num_base_bdevs_operational": 4, 00:25:23.874 "process": { 00:25:23.875 "type": "rebuild", 00:25:23.875 "target": "spare", 00:25:23.875 "progress": { 00:25:23.875 "blocks": 78720, 00:25:23.875 "percent": 40 00:25:23.875 } 00:25:23.875 }, 00:25:23.875 "base_bdevs_list": [ 00:25:23.875 { 00:25:23.875 "name": "spare", 00:25:23.875 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:23.875 "is_configured": true, 00:25:23.875 "data_offset": 0, 00:25:23.875 "data_size": 65536 00:25:23.875 }, 00:25:23.875 { 00:25:23.875 "name": "BaseBdev2", 00:25:23.875 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:23.875 "is_configured": true, 00:25:23.875 "data_offset": 0, 00:25:23.875 "data_size": 65536 00:25:23.875 }, 00:25:23.875 { 00:25:23.875 "name": "BaseBdev3", 00:25:23.875 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:23.875 "is_configured": true, 00:25:23.875 "data_offset": 0, 00:25:23.875 "data_size": 65536 00:25:23.875 }, 00:25:23.875 { 00:25:23.875 "name": "BaseBdev4", 00:25:23.875 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 
00:25:23.875 "is_configured": true, 00:25:23.875 "data_offset": 0, 00:25:23.875 "data_size": 65536 00:25:23.875 } 00:25:23.875 ] 00:25:23.875 }' 00:25:23.875 13:10:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:23.875 13:10:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:23.875 13:10:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:23.875 13:10:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:23.875 13:10:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:25.252 "name": "raid_bdev1", 00:25:25.252 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:25.252 "strip_size_kb": 64, 00:25:25.252 "state": "online", 00:25:25.252 "raid_level": "raid5f", 00:25:25.252 "superblock": false, 00:25:25.252 "num_base_bdevs": 4, 00:25:25.252 "num_base_bdevs_discovered": 4, 00:25:25.252 "num_base_bdevs_operational": 4, 00:25:25.252 "process": { 00:25:25.252 "type": "rebuild", 00:25:25.252 "target": "spare", 00:25:25.252 "progress": { 00:25:25.252 "blocks": 105600, 00:25:25.252 "percent": 53 00:25:25.252 } 00:25:25.252 }, 00:25:25.252 "base_bdevs_list": [ 00:25:25.252 { 00:25:25.252 "name": "spare", 00:25:25.252 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:25.252 "is_configured": true, 00:25:25.252 "data_offset": 0, 00:25:25.252 "data_size": 65536 00:25:25.252 }, 00:25:25.252 { 00:25:25.252 "name": "BaseBdev2", 00:25:25.252 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:25.252 "is_configured": true, 00:25:25.252 "data_offset": 0, 00:25:25.252 "data_size": 65536 00:25:25.252 }, 00:25:25.252 { 00:25:25.252 "name": "BaseBdev3", 00:25:25.252 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:25.252 "is_configured": true, 00:25:25.252 "data_offset": 0, 00:25:25.252 "data_size": 65536 00:25:25.252 }, 00:25:25.252 { 00:25:25.252 "name": "BaseBdev4", 00:25:25.252 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:25.252 "is_configured": true, 00:25:25.252 "data_offset": 0, 00:25:25.252 "data_size": 65536 00:25:25.252 } 00:25:25.252 ] 00:25:25.252 }' 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:25.252 13:10:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:26.187 13:10:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:26.187 13:10:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:26.187 13:10:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:26.187 13:10:44 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:26.187 13:10:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:26.187 13:10:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:26.187 13:10:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.187 13:10:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.445 13:10:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:26.445 "name": "raid_bdev1", 00:25:26.445 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:26.445 "strip_size_kb": 64, 00:25:26.445 "state": "online", 00:25:26.445 "raid_level": "raid5f", 00:25:26.445 "superblock": false, 00:25:26.445 "num_base_bdevs": 4, 00:25:26.445 "num_base_bdevs_discovered": 4, 00:25:26.445 "num_base_bdevs_operational": 4, 00:25:26.445 "process": { 00:25:26.445 "type": "rebuild", 00:25:26.445 "target": "spare", 00:25:26.445 "progress": { 00:25:26.445 "blocks": 130560, 00:25:26.445 "percent": 66 00:25:26.445 } 00:25:26.445 }, 00:25:26.445 "base_bdevs_list": [ 00:25:26.445 { 00:25:26.445 "name": "spare", 00:25:26.445 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:26.445 "is_configured": true, 00:25:26.445 "data_offset": 0, 00:25:26.445 "data_size": 65536 00:25:26.445 }, 00:25:26.445 { 00:25:26.445 "name": "BaseBdev2", 00:25:26.445 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:26.445 "is_configured": true, 00:25:26.445 "data_offset": 0, 00:25:26.445 "data_size": 65536 00:25:26.445 }, 00:25:26.445 { 00:25:26.445 "name": "BaseBdev3", 00:25:26.445 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:26.445 "is_configured": true, 00:25:26.445 "data_offset": 0, 00:25:26.445 "data_size": 65536 00:25:26.445 }, 00:25:26.445 { 00:25:26.445 "name": "BaseBdev4", 00:25:26.445 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:26.445 "is_configured": true, 00:25:26.445 "data_offset": 0, 00:25:26.445 "data_size": 65536 00:25:26.445 } 00:25:26.445 ] 00:25:26.445 }' 00:25:26.445 13:10:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:26.445 13:10:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:26.445 13:10:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:26.703 13:10:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:26.703 13:10:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:27.638 13:10:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:27.638 13:10:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:27.638 13:10:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:27.638 13:10:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:27.638 13:10:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:27.638 13:10:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:27.638 13:10:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.638 13:10:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.897 13:10:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:27.897 "name": "raid_bdev1", 00:25:27.897 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:27.897 "strip_size_kb": 64, 00:25:27.897 "state": "online", 00:25:27.897 "raid_level": "raid5f", 00:25:27.897 "superblock": false, 00:25:27.897 "num_base_bdevs": 4, 00:25:27.897 "num_base_bdevs_discovered": 4, 00:25:27.897 
"num_base_bdevs_operational": 4, 00:25:27.897 "process": { 00:25:27.897 "type": "rebuild", 00:25:27.897 "target": "spare", 00:25:27.897 "progress": { 00:25:27.897 "blocks": 155520, 00:25:27.897 "percent": 79 00:25:27.897 } 00:25:27.897 }, 00:25:27.897 "base_bdevs_list": [ 00:25:27.897 { 00:25:27.897 "name": "spare", 00:25:27.898 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:27.898 "is_configured": true, 00:25:27.898 "data_offset": 0, 00:25:27.898 "data_size": 65536 00:25:27.898 }, 00:25:27.898 { 00:25:27.898 "name": "BaseBdev2", 00:25:27.898 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:27.898 "is_configured": true, 00:25:27.898 "data_offset": 0, 00:25:27.898 "data_size": 65536 00:25:27.898 }, 00:25:27.898 { 00:25:27.898 "name": "BaseBdev3", 00:25:27.898 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:27.898 "is_configured": true, 00:25:27.898 "data_offset": 0, 00:25:27.898 "data_size": 65536 00:25:27.898 }, 00:25:27.898 { 00:25:27.898 "name": "BaseBdev4", 00:25:27.898 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:27.898 "is_configured": true, 00:25:27.898 "data_offset": 0, 00:25:27.898 "data_size": 65536 00:25:27.898 } 00:25:27.898 ] 00:25:27.898 }' 00:25:27.898 13:10:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:27.898 13:10:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:27.898 13:10:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:27.898 13:10:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:27.898 13:10:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:29.274 "name": "raid_bdev1", 00:25:29.274 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:29.274 "strip_size_kb": 64, 00:25:29.274 "state": "online", 00:25:29.274 "raid_level": "raid5f", 00:25:29.274 "superblock": false, 00:25:29.274 "num_base_bdevs": 4, 00:25:29.274 "num_base_bdevs_discovered": 4, 00:25:29.274 "num_base_bdevs_operational": 4, 00:25:29.274 "process": { 00:25:29.274 "type": "rebuild", 00:25:29.274 "target": "spare", 00:25:29.274 "progress": { 00:25:29.274 "blocks": 182400, 00:25:29.274 "percent": 92 00:25:29.274 } 00:25:29.274 }, 00:25:29.274 "base_bdevs_list": [ 00:25:29.274 { 00:25:29.274 "name": "spare", 00:25:29.274 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:29.274 "is_configured": true, 00:25:29.274 "data_offset": 0, 00:25:29.274 "data_size": 65536 00:25:29.274 }, 00:25:29.274 { 00:25:29.274 "name": "BaseBdev2", 00:25:29.274 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:29.274 "is_configured": true, 00:25:29.274 "data_offset": 0, 00:25:29.274 "data_size": 65536 00:25:29.274 }, 00:25:29.274 { 00:25:29.274 "name": "BaseBdev3", 00:25:29.274 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 
00:25:29.274 "is_configured": true, 00:25:29.274 "data_offset": 0, 00:25:29.274 "data_size": 65536 00:25:29.274 }, 00:25:29.274 { 00:25:29.274 "name": "BaseBdev4", 00:25:29.274 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:29.274 "is_configured": true, 00:25:29.274 "data_offset": 0, 00:25:29.274 "data_size": 65536 00:25:29.274 } 00:25:29.274 ] 00:25:29.274 }' 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:29.274 13:10:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:29.274 13:10:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:29.274 13:10:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:30.214 [2024-06-11 13:10:48.690958] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:30.214 [2024-06-11 13:10:48.691238] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:30.214 [2024-06-11 13:10:48.691437] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.214 13:10:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:30.215 13:10:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:30.215 13:10:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:30.215 13:10:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:30.215 13:10:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:30.215 13:10:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:30.215 13:10:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.215 13:10:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.473 13:10:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:30.473 "name": "raid_bdev1", 00:25:30.473 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:30.473 "strip_size_kb": 64, 00:25:30.473 "state": "online", 00:25:30.473 "raid_level": "raid5f", 00:25:30.473 "superblock": false, 00:25:30.473 "num_base_bdevs": 4, 00:25:30.473 "num_base_bdevs_discovered": 4, 00:25:30.473 "num_base_bdevs_operational": 4, 00:25:30.473 "base_bdevs_list": [ 00:25:30.473 { 00:25:30.473 "name": "spare", 00:25:30.473 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:30.473 "is_configured": true, 00:25:30.473 "data_offset": 0, 00:25:30.473 "data_size": 65536 00:25:30.473 }, 00:25:30.473 { 00:25:30.473 "name": "BaseBdev2", 00:25:30.473 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:30.473 "is_configured": true, 00:25:30.473 "data_offset": 0, 00:25:30.473 "data_size": 65536 00:25:30.473 }, 00:25:30.473 { 00:25:30.473 "name": "BaseBdev3", 00:25:30.473 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:30.473 "is_configured": true, 00:25:30.473 "data_offset": 0, 00:25:30.473 "data_size": 65536 00:25:30.473 }, 00:25:30.473 { 00:25:30.473 "name": "BaseBdev4", 00:25:30.473 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:30.473 "is_configured": true, 00:25:30.473 "data_offset": 0, 00:25:30.473 "data_size": 65536 00:25:30.473 } 00:25:30.473 ] 00:25:30.473 }' 00:25:30.473 13:10:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:30.473 13:10:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:30.473 13:10:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:30.731 13:10:49 -- 
bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:30.731 13:10:49 -- bdev/bdev_raid.sh@660 -- # break 00:25:30.731 13:10:49 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:30.731 13:10:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:30.731 13:10:49 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:30.731 13:10:49 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:30.731 13:10:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:30.731 13:10:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.731 13:10:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.731 13:10:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:30.731 "name": "raid_bdev1", 00:25:30.731 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:30.731 "strip_size_kb": 64, 00:25:30.731 "state": "online", 00:25:30.731 "raid_level": "raid5f", 00:25:30.731 "superblock": false, 00:25:30.731 "num_base_bdevs": 4, 00:25:30.731 "num_base_bdevs_discovered": 4, 00:25:30.731 "num_base_bdevs_operational": 4, 00:25:30.731 "base_bdevs_list": [ 00:25:30.731 { 00:25:30.731 "name": "spare", 00:25:30.731 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:30.731 "is_configured": true, 00:25:30.731 "data_offset": 0, 00:25:30.731 "data_size": 65536 00:25:30.731 }, 00:25:30.731 { 00:25:30.731 "name": "BaseBdev2", 00:25:30.731 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:30.731 "is_configured": true, 00:25:30.731 "data_offset": 0, 00:25:30.731 "data_size": 65536 00:25:30.731 }, 00:25:30.731 { 00:25:30.731 "name": "BaseBdev3", 00:25:30.731 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:30.731 "is_configured": true, 00:25:30.731 "data_offset": 0, 00:25:30.731 "data_size": 65536 00:25:30.731 }, 00:25:30.731 { 00:25:30.731 "name": "BaseBdev4", 00:25:30.731 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:30.731 "is_configured": true, 00:25:30.731 "data_offset": 0, 00:25:30.731 "data_size": 65536 00:25:30.731 } 00:25:30.731 ] 00:25:30.731 }' 00:25:30.731 13:10:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:30.989 13:10:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:30.989 13:10:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:30.989 13:10:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:30.989 13:10:49 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:30.990 13:10:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:30.990 13:10:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:30.990 13:10:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:30.990 13:10:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:30.990 13:10:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:30.990 13:10:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:30.990 13:10:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:30.990 13:10:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:30.990 13:10:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:30.990 13:10:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.990 13:10:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.248 13:10:49 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:25:31.248 "name": "raid_bdev1", 00:25:31.248 "uuid": "f88dfc05-2624-47e7-9d8f-3cec76e12c40", 00:25:31.248 "strip_size_kb": 64, 00:25:31.248 "state": "online", 00:25:31.248 "raid_level": "raid5f", 00:25:31.248 "superblock": false, 00:25:31.248 "num_base_bdevs": 4, 00:25:31.248 "num_base_bdevs_discovered": 4, 00:25:31.248 "num_base_bdevs_operational": 4, 00:25:31.248 "base_bdevs_list": [ 00:25:31.248 { 00:25:31.248 "name": "spare", 00:25:31.248 "uuid": "f27c8624-faf8-5625-bc38-d0b17c8e14fd", 00:25:31.248 "is_configured": true, 00:25:31.248 "data_offset": 0, 00:25:31.248 "data_size": 65536 00:25:31.248 }, 00:25:31.248 { 00:25:31.248 "name": "BaseBdev2", 00:25:31.248 "uuid": "3203f586-eb24-4e40-9b14-24b267eecd0e", 00:25:31.248 "is_configured": true, 00:25:31.248 "data_offset": 0, 00:25:31.248 "data_size": 65536 00:25:31.248 }, 00:25:31.248 { 00:25:31.248 "name": "BaseBdev3", 00:25:31.248 "uuid": "a7bf5aa4-6ae1-4d82-960b-acb0d53f96c7", 00:25:31.248 "is_configured": true, 00:25:31.248 "data_offset": 0, 00:25:31.248 "data_size": 65536 00:25:31.248 }, 00:25:31.248 { 00:25:31.248 "name": "BaseBdev4", 00:25:31.248 "uuid": "09305ad7-c35f-412b-a02f-a42f84c9ce7b", 00:25:31.248 "is_configured": true, 00:25:31.248 "data_offset": 0, 00:25:31.248 "data_size": 65536 00:25:31.248 } 00:25:31.248 ] 00:25:31.248 }' 00:25:31.248 13:10:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:31.248 13:10:49 -- common/autotest_common.sh@10 -- # set +x 00:25:31.813 13:10:50 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:32.070 [2024-06-11 13:10:50.807369] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:32.070 [2024-06-11 13:10:50.807654] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:32.070 [2024-06-11 13:10:50.807896] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:32.070 [2024-06-11 13:10:50.808152] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:32.070 [2024-06-11 13:10:50.808260] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:25:32.070 13:10:50 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.070 13:10:50 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:32.328 13:10:51 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:32.328 13:10:51 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:32.328 13:10:51 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:32.328 13:10:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:32.328 13:10:51 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:32.328 13:10:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:32.328 13:10:51 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:32.328 13:10:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:32.328 13:10:51 -- bdev/nbd_common.sh@12 -- # local i 00:25:32.328 13:10:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:32.328 13:10:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:32.328 13:10:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:32.586 /dev/nbd0 00:25:32.586 13:10:51 -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd0 00:25:32.586 13:10:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:32.586 13:10:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:32.586 13:10:51 -- common/autotest_common.sh@857 -- # local i 00:25:32.586 13:10:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:32.586 13:10:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:32.586 13:10:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:32.586 13:10:51 -- common/autotest_common.sh@861 -- # break 00:25:32.586 13:10:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:32.586 13:10:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:32.586 13:10:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:32.586 1+0 records in 00:25:32.586 1+0 records out 00:25:32.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050074 s, 8.2 MB/s 00:25:32.586 13:10:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:32.586 13:10:51 -- common/autotest_common.sh@874 -- # size=4096 00:25:32.586 13:10:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:32.586 13:10:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:32.586 13:10:51 -- common/autotest_common.sh@877 -- # return 0 00:25:32.586 13:10:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:32.586 13:10:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:32.586 13:10:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:32.845 /dev/nbd1 00:25:32.845 13:10:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:32.845 13:10:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:32.845 13:10:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:32.845 13:10:51 -- common/autotest_common.sh@857 -- # local i 00:25:32.845 13:10:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:32.845 13:10:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:32.845 13:10:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:32.845 13:10:51 -- common/autotest_common.sh@861 -- # break 00:25:32.845 13:10:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:32.845 13:10:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:32.845 13:10:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:32.845 1+0 records in 00:25:32.845 1+0 records out 00:25:32.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052311 s, 7.8 MB/s 00:25:32.845 13:10:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:32.845 13:10:51 -- common/autotest_common.sh@874 -- # size=4096 00:25:32.845 13:10:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:32.845 13:10:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:32.845 13:10:51 -- common/autotest_common.sh@877 -- # return 0 00:25:32.845 13:10:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:32.845 13:10:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:32.845 13:10:51 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:33.103 13:10:51 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:33.103 13:10:51 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:33.103 13:10:51 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@51 -- # local i 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@41 -- # break 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@45 -- # return 0 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:33.104 13:10:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:33.362 13:10:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:33.362 13:10:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:33.362 13:10:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:33.362 13:10:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:33.362 13:10:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:33.362 13:10:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:33.362 13:10:52 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:33.618 13:10:52 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:33.618 13:10:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:33.618 13:10:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:33.618 13:10:52 -- bdev/nbd_common.sh@41 -- # break 00:25:33.618 13:10:52 -- bdev/nbd_common.sh@45 -- # return 0 00:25:33.618 13:10:52 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:33.618 13:10:52 -- bdev/bdev_raid.sh@709 -- # killprocess 134869 00:25:33.618 13:10:52 -- common/autotest_common.sh@926 -- # '[' -z 134869 ']' 00:25:33.618 13:10:52 -- common/autotest_common.sh@930 -- # kill -0 134869 00:25:33.618 13:10:52 -- common/autotest_common.sh@931 -- # uname 00:25:33.618 13:10:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:33.618 13:10:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134869 00:25:33.618 13:10:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:33.618 13:10:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:33.618 13:10:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134869' 00:25:33.618 killing process with pid 134869 00:25:33.618 13:10:52 -- common/autotest_common.sh@945 -- # kill 134869 00:25:33.618 Received shutdown signal, test time was about 60.000000 seconds 00:25:33.618 00:25:33.618 Latency(us) 00:25:33.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.618 =================================================================================================================== 00:25:33.618 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:33.618 13:10:52 -- common/autotest_common.sh@950 -- # wait 134869 00:25:33.618 [2024-06-11 13:10:52.268875] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:33.875 [2024-06-11 13:10:52.607554] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:34.809 ************************************ 00:25:34.809 END TEST raid5f_rebuild_test 00:25:34.809 ************************************ 00:25:34.809 13:10:53 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:34.809 00:25:34.809 real 0m25.136s 00:25:34.809 user 0m36.884s 00:25:34.809 sys 0m2.519s 00:25:34.809 13:10:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.809 13:10:53 -- common/autotest_common.sh@10 -- # set +x 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:25:35.067 13:10:53 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:35.067 13:10:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:35.067 13:10:53 -- common/autotest_common.sh@10 -- # set +x 00:25:35.067 ************************************ 00:25:35.067 START TEST raid5f_rebuild_test_sb 00:25:35.067 ************************************ 00:25:35.067 13:10:53 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:35.067 13:10:53 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:35.068 13:10:53 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:35.068 13:10:53 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:35.068 13:10:53 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:35.068 13:10:53 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:35.068 13:10:53 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:35.068 13:10:53 -- bdev/bdev_raid.sh@544 -- # raid_pid=135539 00:25:35.068 
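[annotation] The superblock test starting here drives everything through the same rpc.py socket as the previous run. A minimal sketch, assuming the rpc.py path and RPC socket used throughout this log, of how the create_arg assembled just above ("-z 64" plus "-s") feeds the bdev_raid_create call that appears further below; the shell variable names are illustrative, not the literal bdev_raid.sh code:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    create_arg="-z 64"     # raid5f strip size in KB
    create_arg+=" -s"      # superblock variant; base data then starts at data_offset 2048 blocks
    $rpc_py bdev_raid_create $create_arg -r raid5f \
            -b "BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4" -n raid_bdev1

The "-s" flag is what makes the dumps later in this test report data_offset 2048 and data_size 63488, instead of the offset 0 / size 65536 seen in the non-superblock run above.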
13:10:53 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:35.068 13:10:53 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135539 /var/tmp/spdk-raid.sock 00:25:35.068 13:10:53 -- common/autotest_common.sh@819 -- # '[' -z 135539 ']' 00:25:35.068 13:10:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:35.068 13:10:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:35.068 13:10:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:35.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:35.068 13:10:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:35.068 13:10:53 -- common/autotest_common.sh@10 -- # set +x 00:25:35.068 [2024-06-11 13:10:53.755526] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:35.068 [2024-06-11 13:10:53.755972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135539 ] 00:25:35.068 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:35.068 Zero copy mechanism will not be used. 00:25:35.068 [2024-06-11 13:10:53.907751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.325 [2024-06-11 13:10:54.117675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.583 [2024-06-11 13:10:54.307794] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:35.841 13:10:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:35.841 13:10:54 -- common/autotest_common.sh@852 -- # return 0 00:25:35.841 13:10:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:35.841 13:10:54 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:35.841 13:10:54 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:36.099 BaseBdev1_malloc 00:25:36.099 13:10:54 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:36.357 [2024-06-11 13:10:55.088853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:36.357 [2024-06-11 13:10:55.089201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.357 [2024-06-11 13:10:55.089277] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:25:36.357 [2024-06-11 13:10:55.089635] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.357 [2024-06-11 13:10:55.092139] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.357 [2024-06-11 13:10:55.092313] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:36.357 BaseBdev1 00:25:36.357 13:10:55 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:36.357 13:10:55 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:36.357 13:10:55 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:25:36.615 BaseBdev2_malloc 00:25:36.615 13:10:55 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:36.873 [2024-06-11 13:10:55.505498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:36.873 [2024-06-11 13:10:55.505735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.873 [2024-06-11 13:10:55.505817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:36.873 [2024-06-11 13:10:55.506137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.873 [2024-06-11 13:10:55.508554] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.873 [2024-06-11 13:10:55.508735] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:36.873 BaseBdev2 00:25:36.873 13:10:55 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:36.873 13:10:55 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:36.873 13:10:55 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:37.131 BaseBdev3_malloc 00:25:37.131 13:10:55 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:37.131 [2024-06-11 13:10:55.962875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:37.131 [2024-06-11 13:10:55.963138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.131 [2024-06-11 13:10:55.963226] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:37.131 [2024-06-11 13:10:55.963493] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.131 [2024-06-11 13:10:55.965960] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.131 [2024-06-11 13:10:55.966145] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:37.131 BaseBdev3 00:25:37.389 13:10:55 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:37.389 13:10:55 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:37.389 13:10:55 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:37.389 BaseBdev4_malloc 00:25:37.389 13:10:56 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:37.648 [2024-06-11 13:10:56.391957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:37.648 [2024-06-11 13:10:56.392201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.648 [2024-06-11 13:10:56.392273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:37.648 [2024-06-11 13:10:56.392603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.648 [2024-06-11 13:10:56.395072] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.648 [2024-06-11 13:10:56.395250] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 
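[annotation] Each base device above is a malloc bdev stacked under a passthru bdev before being handed to the raid module. A compact sketch of that per-bdev stacking, assuming the same rpc.py wrapper as in the previous note (the loop itself is illustrative; the trace above runs it as "for bdev in ${base_bdevs[@]}"):

    for i in 1 2 3 4; do
        # 32 MB backing store with a 512-byte block size (65536 blocks total)
        $rpc_py bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        # passthru layer exposes a claimable bdev named BaseBdev$i on top of the malloc bdev
        $rpc_py bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done

The 65536-block size is consistent with the data_offset 2048 / data_size 63488 reported for each base bdev in the raid dumps below.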
00:25:37.648 BaseBdev4 00:25:37.648 13:10:56 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:37.906 spare_malloc 00:25:37.906 13:10:56 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:38.164 spare_delay 00:25:38.164 13:10:56 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:38.423 [2024-06-11 13:10:57.062054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:38.423 [2024-06-11 13:10:57.062291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.423 [2024-06-11 13:10:57.062435] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:38.423 [2024-06-11 13:10:57.062579] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:38.423 [2024-06-11 13:10:57.065015] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.423 [2024-06-11 13:10:57.065205] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:38.423 spare 00:25:38.423 13:10:57 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:38.682 [2024-06-11 13:10:57.306184] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:38.682 [2024-06-11 13:10:57.308265] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:38.682 [2024-06-11 13:10:57.308482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:38.682 [2024-06-11 13:10:57.308580] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:38.682 [2024-06-11 13:10:57.308922] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:25:38.682 [2024-06-11 13:10:57.308969] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:38.682 [2024-06-11 13:10:57.309192] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:25:38.682 [2024-06-11 13:10:57.314886] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:25:38.682 [2024-06-11 13:10:57.315040] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:25:38.682 [2024-06-11 13:10:57.315323] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@124 -- # 
local num_base_bdevs_discovered 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:38.682 "name": "raid_bdev1", 00:25:38.682 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:38.682 "strip_size_kb": 64, 00:25:38.682 "state": "online", 00:25:38.682 "raid_level": "raid5f", 00:25:38.682 "superblock": true, 00:25:38.682 "num_base_bdevs": 4, 00:25:38.682 "num_base_bdevs_discovered": 4, 00:25:38.682 "num_base_bdevs_operational": 4, 00:25:38.682 "base_bdevs_list": [ 00:25:38.682 { 00:25:38.682 "name": "BaseBdev1", 00:25:38.682 "uuid": "37662f56-3806-584b-ad1b-da925de1d825", 00:25:38.682 "is_configured": true, 00:25:38.682 "data_offset": 2048, 00:25:38.682 "data_size": 63488 00:25:38.682 }, 00:25:38.682 { 00:25:38.682 "name": "BaseBdev2", 00:25:38.682 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:38.682 "is_configured": true, 00:25:38.682 "data_offset": 2048, 00:25:38.682 "data_size": 63488 00:25:38.682 }, 00:25:38.682 { 00:25:38.682 "name": "BaseBdev3", 00:25:38.682 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:38.682 "is_configured": true, 00:25:38.682 "data_offset": 2048, 00:25:38.682 "data_size": 63488 00:25:38.682 }, 00:25:38.682 { 00:25:38.682 "name": "BaseBdev4", 00:25:38.682 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:38.682 "is_configured": true, 00:25:38.682 "data_offset": 2048, 00:25:38.682 "data_size": 63488 00:25:38.682 } 00:25:38.682 ] 00:25:38.682 }' 00:25:38.682 13:10:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:38.682 13:10:57 -- common/autotest_common.sh@10 -- # set +x 00:25:39.618 13:10:58 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:39.618 13:10:58 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:39.618 [2024-06-11 13:10:58.354165] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:39.618 13:10:58 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:25:39.618 13:10:58 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.618 13:10:58 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:39.876 13:10:58 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:39.876 13:10:58 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:39.876 13:10:58 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:39.876 13:10:58 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:39.876 13:10:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:39.876 13:10:58 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:39.876 13:10:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:39.876 13:10:58 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:39.876 13:10:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:39.876 13:10:58 -- bdev/nbd_common.sh@12 -- # local i 00:25:39.876 13:10:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:39.876 13:10:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:39.876 13:10:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk 
raid_bdev1 /dev/nbd0 00:25:40.136 [2024-06-11 13:10:58.746136] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:40.136 /dev/nbd0 00:25:40.136 13:10:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:40.136 13:10:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:40.136 13:10:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:40.136 13:10:58 -- common/autotest_common.sh@857 -- # local i 00:25:40.136 13:10:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:40.136 13:10:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:40.136 13:10:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:40.136 13:10:58 -- common/autotest_common.sh@861 -- # break 00:25:40.136 13:10:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:40.136 13:10:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:40.136 13:10:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:40.136 1+0 records in 00:25:40.136 1+0 records out 00:25:40.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047929 s, 8.5 MB/s 00:25:40.136 13:10:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:40.136 13:10:58 -- common/autotest_common.sh@874 -- # size=4096 00:25:40.136 13:10:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:40.136 13:10:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:40.136 13:10:58 -- common/autotest_common.sh@877 -- # return 0 00:25:40.136 13:10:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:40.136 13:10:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:40.136 13:10:58 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:40.136 13:10:58 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:40.136 13:10:58 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:40.136 13:10:58 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:25:40.704 496+0 records in 00:25:40.704 496+0 records out 00:25:40.704 97517568 bytes (98 MB, 93 MiB) copied, 0.517652 s, 188 MB/s 00:25:40.704 13:10:59 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:40.704 13:10:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:40.704 13:10:59 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:40.704 13:10:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:40.704 13:10:59 -- bdev/nbd_common.sh@51 -- # local i 00:25:40.704 13:10:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:40.704 13:10:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:40.972 13:10:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:40.972 13:10:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:40.972 13:10:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:40.972 13:10:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:40.972 13:10:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:40.972 13:10:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:40.972 [2024-06-11 13:10:59.600217] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:40.972 13:10:59 -- bdev/nbd_common.sh@41 -- # break 00:25:40.972 13:10:59 -- bdev/nbd_common.sh@45 -- # return 0 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:40.972 [2024-06-11 13:10:59.776089] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.972 13:10:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.242 13:10:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:41.242 "name": "raid_bdev1", 00:25:41.242 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:41.242 "strip_size_kb": 64, 00:25:41.242 "state": "online", 00:25:41.242 "raid_level": "raid5f", 00:25:41.242 "superblock": true, 00:25:41.242 "num_base_bdevs": 4, 00:25:41.242 "num_base_bdevs_discovered": 3, 00:25:41.242 "num_base_bdevs_operational": 3, 00:25:41.242 "base_bdevs_list": [ 00:25:41.242 { 00:25:41.242 "name": null, 00:25:41.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.242 "is_configured": false, 00:25:41.242 "data_offset": 2048, 00:25:41.242 "data_size": 63488 00:25:41.242 }, 00:25:41.242 { 00:25:41.242 "name": "BaseBdev2", 00:25:41.242 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:41.242 "is_configured": true, 00:25:41.242 "data_offset": 2048, 00:25:41.242 "data_size": 63488 00:25:41.242 }, 00:25:41.242 { 00:25:41.242 "name": "BaseBdev3", 00:25:41.242 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:41.242 "is_configured": true, 00:25:41.242 "data_offset": 2048, 00:25:41.242 "data_size": 63488 00:25:41.242 }, 00:25:41.242 { 00:25:41.242 "name": "BaseBdev4", 00:25:41.242 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:41.242 "is_configured": true, 00:25:41.242 "data_offset": 2048, 00:25:41.242 "data_size": 63488 00:25:41.242 } 00:25:41.242 ] 00:25:41.242 }' 00:25:41.242 13:10:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:41.242 13:10:59 -- common/autotest_common.sh@10 -- # set +x 00:25:42.179 13:11:00 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:42.179 [2024-06-11 13:11:00.872292] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:42.179 [2024-06-11 13:11:00.872571] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:42.179 [2024-06-11 13:11:00.883792] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c860 00:25:42.179 [2024-06-11 13:11:00.891216] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:42.179 13:11:00 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:43.115 13:11:01 -- 
bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:43.115 13:11:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:43.115 13:11:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:43.115 13:11:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:43.115 13:11:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:43.115 13:11:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.115 13:11:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.376 13:11:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:43.376 "name": "raid_bdev1", 00:25:43.376 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:43.376 "strip_size_kb": 64, 00:25:43.376 "state": "online", 00:25:43.376 "raid_level": "raid5f", 00:25:43.376 "superblock": true, 00:25:43.376 "num_base_bdevs": 4, 00:25:43.376 "num_base_bdevs_discovered": 4, 00:25:43.376 "num_base_bdevs_operational": 4, 00:25:43.376 "process": { 00:25:43.376 "type": "rebuild", 00:25:43.376 "target": "spare", 00:25:43.376 "progress": { 00:25:43.376 "blocks": 23040, 00:25:43.376 "percent": 12 00:25:43.376 } 00:25:43.376 }, 00:25:43.376 "base_bdevs_list": [ 00:25:43.376 { 00:25:43.376 "name": "spare", 00:25:43.376 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:43.376 "is_configured": true, 00:25:43.376 "data_offset": 2048, 00:25:43.376 "data_size": 63488 00:25:43.376 }, 00:25:43.376 { 00:25:43.376 "name": "BaseBdev2", 00:25:43.376 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:43.376 "is_configured": true, 00:25:43.376 "data_offset": 2048, 00:25:43.376 "data_size": 63488 00:25:43.376 }, 00:25:43.376 { 00:25:43.376 "name": "BaseBdev3", 00:25:43.376 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:43.376 "is_configured": true, 00:25:43.376 "data_offset": 2048, 00:25:43.376 "data_size": 63488 00:25:43.376 }, 00:25:43.376 { 00:25:43.376 "name": "BaseBdev4", 00:25:43.376 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:43.376 "is_configured": true, 00:25:43.376 "data_offset": 2048, 00:25:43.376 "data_size": 63488 00:25:43.376 } 00:25:43.376 ] 00:25:43.376 }' 00:25:43.377 13:11:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:43.377 13:11:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:43.377 13:11:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:43.634 13:11:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:43.634 13:11:02 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:43.634 [2024-06-11 13:11:02.460915] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:43.892 [2024-06-11 13:11:02.504516] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:43.892 [2024-06-11 13:11:02.504771] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@121 
-- # local num_base_bdevs_operational=3 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.892 13:11:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.150 13:11:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:44.150 "name": "raid_bdev1", 00:25:44.150 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:44.150 "strip_size_kb": 64, 00:25:44.150 "state": "online", 00:25:44.150 "raid_level": "raid5f", 00:25:44.150 "superblock": true, 00:25:44.150 "num_base_bdevs": 4, 00:25:44.150 "num_base_bdevs_discovered": 3, 00:25:44.150 "num_base_bdevs_operational": 3, 00:25:44.150 "base_bdevs_list": [ 00:25:44.150 { 00:25:44.150 "name": null, 00:25:44.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.150 "is_configured": false, 00:25:44.150 "data_offset": 2048, 00:25:44.150 "data_size": 63488 00:25:44.150 }, 00:25:44.150 { 00:25:44.150 "name": "BaseBdev2", 00:25:44.150 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:44.150 "is_configured": true, 00:25:44.150 "data_offset": 2048, 00:25:44.150 "data_size": 63488 00:25:44.150 }, 00:25:44.150 { 00:25:44.150 "name": "BaseBdev3", 00:25:44.150 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:44.150 "is_configured": true, 00:25:44.150 "data_offset": 2048, 00:25:44.150 "data_size": 63488 00:25:44.150 }, 00:25:44.150 { 00:25:44.150 "name": "BaseBdev4", 00:25:44.150 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:44.150 "is_configured": true, 00:25:44.150 "data_offset": 2048, 00:25:44.150 "data_size": 63488 00:25:44.150 } 00:25:44.150 ] 00:25:44.150 }' 00:25:44.150 13:11:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:44.150 13:11:02 -- common/autotest_common.sh@10 -- # set +x 00:25:44.716 13:11:03 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:44.716 13:11:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:44.716 13:11:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:44.716 13:11:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:44.716 13:11:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:44.716 13:11:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.716 13:11:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.975 13:11:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:44.975 "name": "raid_bdev1", 00:25:44.975 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:44.975 "strip_size_kb": 64, 00:25:44.975 "state": "online", 00:25:44.975 "raid_level": "raid5f", 00:25:44.975 "superblock": true, 00:25:44.975 "num_base_bdevs": 4, 00:25:44.975 "num_base_bdevs_discovered": 3, 00:25:44.975 "num_base_bdevs_operational": 3, 00:25:44.975 "base_bdevs_list": [ 00:25:44.975 { 00:25:44.975 "name": null, 00:25:44.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.975 "is_configured": false, 00:25:44.975 "data_offset": 2048, 00:25:44.975 "data_size": 63488 00:25:44.975 }, 00:25:44.975 { 00:25:44.976 "name": "BaseBdev2", 00:25:44.976 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:44.976 
"is_configured": true, 00:25:44.976 "data_offset": 2048, 00:25:44.976 "data_size": 63488 00:25:44.976 }, 00:25:44.976 { 00:25:44.976 "name": "BaseBdev3", 00:25:44.976 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:44.976 "is_configured": true, 00:25:44.976 "data_offset": 2048, 00:25:44.976 "data_size": 63488 00:25:44.976 }, 00:25:44.976 { 00:25:44.976 "name": "BaseBdev4", 00:25:44.976 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:44.976 "is_configured": true, 00:25:44.976 "data_offset": 2048, 00:25:44.976 "data_size": 63488 00:25:44.976 } 00:25:44.976 ] 00:25:44.976 }' 00:25:44.976 13:11:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:44.976 13:11:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:44.976 13:11:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:44.976 13:11:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:44.976 13:11:03 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:45.234 [2024-06-11 13:11:03.952364] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:45.234 [2024-06-11 13:11:03.952582] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:45.234 [2024-06-11 13:11:03.962896] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ca00 00:25:45.234 [2024-06-11 13:11:03.970139] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:45.234 13:11:03 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:46.169 13:11:04 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.169 13:11:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:46.169 13:11:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:46.169 13:11:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:46.169 13:11:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:46.169 13:11:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.169 13:11:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.428 13:11:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:46.428 "name": "raid_bdev1", 00:25:46.428 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:46.428 "strip_size_kb": 64, 00:25:46.428 "state": "online", 00:25:46.428 "raid_level": "raid5f", 00:25:46.428 "superblock": true, 00:25:46.428 "num_base_bdevs": 4, 00:25:46.428 "num_base_bdevs_discovered": 4, 00:25:46.428 "num_base_bdevs_operational": 4, 00:25:46.428 "process": { 00:25:46.428 "type": "rebuild", 00:25:46.428 "target": "spare", 00:25:46.428 "progress": { 00:25:46.428 "blocks": 21120, 00:25:46.428 "percent": 11 00:25:46.428 } 00:25:46.428 }, 00:25:46.428 "base_bdevs_list": [ 00:25:46.428 { 00:25:46.428 "name": "spare", 00:25:46.428 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:46.428 "is_configured": true, 00:25:46.428 "data_offset": 2048, 00:25:46.428 "data_size": 63488 00:25:46.428 }, 00:25:46.428 { 00:25:46.428 "name": "BaseBdev2", 00:25:46.428 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:46.428 "is_configured": true, 00:25:46.428 "data_offset": 2048, 00:25:46.428 "data_size": 63488 00:25:46.428 }, 00:25:46.428 { 00:25:46.428 "name": "BaseBdev3", 00:25:46.428 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:46.428 "is_configured": 
true, 00:25:46.428 "data_offset": 2048, 00:25:46.428 "data_size": 63488 00:25:46.428 }, 00:25:46.428 { 00:25:46.428 "name": "BaseBdev4", 00:25:46.428 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:46.428 "is_configured": true, 00:25:46.428 "data_offset": 2048, 00:25:46.428 "data_size": 63488 00:25:46.428 } 00:25:46.428 ] 00:25:46.428 }' 00:25:46.428 13:11:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:46.428 13:11:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:46.428 13:11:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:46.687 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@657 -- # local timeout=737 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.687 13:11:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.945 13:11:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:46.945 "name": "raid_bdev1", 00:25:46.945 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:46.945 "strip_size_kb": 64, 00:25:46.945 "state": "online", 00:25:46.945 "raid_level": "raid5f", 00:25:46.945 "superblock": true, 00:25:46.945 "num_base_bdevs": 4, 00:25:46.945 "num_base_bdevs_discovered": 4, 00:25:46.945 "num_base_bdevs_operational": 4, 00:25:46.945 "process": { 00:25:46.945 "type": "rebuild", 00:25:46.945 "target": "spare", 00:25:46.945 "progress": { 00:25:46.945 "blocks": 28800, 00:25:46.945 "percent": 15 00:25:46.945 } 00:25:46.945 }, 00:25:46.945 "base_bdevs_list": [ 00:25:46.945 { 00:25:46.945 "name": "spare", 00:25:46.945 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:46.945 "is_configured": true, 00:25:46.945 "data_offset": 2048, 00:25:46.945 "data_size": 63488 00:25:46.945 }, 00:25:46.945 { 00:25:46.945 "name": "BaseBdev2", 00:25:46.945 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:46.945 "is_configured": true, 00:25:46.945 "data_offset": 2048, 00:25:46.945 "data_size": 63488 00:25:46.945 }, 00:25:46.945 { 00:25:46.945 "name": "BaseBdev3", 00:25:46.945 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:46.945 "is_configured": true, 00:25:46.945 "data_offset": 2048, 00:25:46.945 "data_size": 63488 00:25:46.945 }, 00:25:46.945 { 00:25:46.946 "name": "BaseBdev4", 00:25:46.946 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:46.946 "is_configured": true, 00:25:46.946 "data_offset": 2048, 00:25:46.946 "data_size": 63488 00:25:46.946 } 00:25:46.946 ] 00:25:46.946 }' 00:25:46.946 13:11:05 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:25:46.946 13:11:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:46.946 13:11:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:46.946 13:11:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:46.946 13:11:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:47.881 13:11:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:47.881 13:11:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:47.881 13:11:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:47.881 13:11:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:47.881 13:11:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:47.881 13:11:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:47.881 13:11:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.881 13:11:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.140 13:11:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:48.140 "name": "raid_bdev1", 00:25:48.140 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:48.140 "strip_size_kb": 64, 00:25:48.140 "state": "online", 00:25:48.140 "raid_level": "raid5f", 00:25:48.140 "superblock": true, 00:25:48.140 "num_base_bdevs": 4, 00:25:48.140 "num_base_bdevs_discovered": 4, 00:25:48.140 "num_base_bdevs_operational": 4, 00:25:48.140 "process": { 00:25:48.140 "type": "rebuild", 00:25:48.140 "target": "spare", 00:25:48.140 "progress": { 00:25:48.140 "blocks": 53760, 00:25:48.140 "percent": 28 00:25:48.140 } 00:25:48.140 }, 00:25:48.140 "base_bdevs_list": [ 00:25:48.140 { 00:25:48.140 "name": "spare", 00:25:48.140 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:48.140 "is_configured": true, 00:25:48.140 "data_offset": 2048, 00:25:48.140 "data_size": 63488 00:25:48.140 }, 00:25:48.140 { 00:25:48.140 "name": "BaseBdev2", 00:25:48.140 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:48.140 "is_configured": true, 00:25:48.140 "data_offset": 2048, 00:25:48.140 "data_size": 63488 00:25:48.140 }, 00:25:48.140 { 00:25:48.140 "name": "BaseBdev3", 00:25:48.140 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:48.140 "is_configured": true, 00:25:48.140 "data_offset": 2048, 00:25:48.140 "data_size": 63488 00:25:48.140 }, 00:25:48.140 { 00:25:48.140 "name": "BaseBdev4", 00:25:48.140 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:48.140 "is_configured": true, 00:25:48.140 "data_offset": 2048, 00:25:48.140 "data_size": 63488 00:25:48.140 } 00:25:48.140 ] 00:25:48.140 }' 00:25:48.140 13:11:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:48.140 13:11:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:48.140 13:11:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:48.399 13:11:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:48.399 13:11:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:49.335 13:11:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:49.335 13:11:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:49.335 13:11:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:49.335 13:11:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:49.335 13:11:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:49.335 13:11:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 
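[annotation] Two details worth noting in this stretch. First, the "[: =: unary operator expected" message from bdev_raid.sh line 617 a little above is the standard symptom of an unquoted variable that expanded empty inside a single-bracket test (the trace shows it collapsing to '[' = false ']'); the variable involved is not named in the trace. Second, the iterations that follow are the rebuild-progress poll: re-read the raid dump once a second until process.type/process.target stop reporting rebuild/spare or the deadline passes. A minimal sketch of that loop, with illustrative variable names and the rpc.py/socket from this log rather than the literal bdev_raid.sh code; quoting both test operands also avoids the unary-operator error seen above:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    deadline=$((SECONDS + 60))                       # absolute SECONDS value, like "local timeout" in the trace
    while (( SECONDS < deadline )); do
        info=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        ptype=$(jq -r '.process.type // "none"' <<< "$info")
        ptarget=$(jq -r '.process.target // "none"' <<< "$info")
        if [ "$ptype" != "rebuild" ] || [ "$ptarget" != "spare" ]; then
            break                                    # rebuild finished: the process block drops out of the dump
        fi
        sleep 1
    done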
00:25:49.335 13:11:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.335 13:11:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.594 13:11:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:49.594 "name": "raid_bdev1", 00:25:49.594 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:49.594 "strip_size_kb": 64, 00:25:49.594 "state": "online", 00:25:49.594 "raid_level": "raid5f", 00:25:49.594 "superblock": true, 00:25:49.594 "num_base_bdevs": 4, 00:25:49.594 "num_base_bdevs_discovered": 4, 00:25:49.594 "num_base_bdevs_operational": 4, 00:25:49.594 "process": { 00:25:49.594 "type": "rebuild", 00:25:49.594 "target": "spare", 00:25:49.594 "progress": { 00:25:49.594 "blocks": 80640, 00:25:49.594 "percent": 42 00:25:49.594 } 00:25:49.594 }, 00:25:49.594 "base_bdevs_list": [ 00:25:49.594 { 00:25:49.594 "name": "spare", 00:25:49.594 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:49.594 "is_configured": true, 00:25:49.594 "data_offset": 2048, 00:25:49.594 "data_size": 63488 00:25:49.594 }, 00:25:49.594 { 00:25:49.594 "name": "BaseBdev2", 00:25:49.594 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:49.594 "is_configured": true, 00:25:49.594 "data_offset": 2048, 00:25:49.594 "data_size": 63488 00:25:49.594 }, 00:25:49.594 { 00:25:49.594 "name": "BaseBdev3", 00:25:49.594 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:49.594 "is_configured": true, 00:25:49.594 "data_offset": 2048, 00:25:49.594 "data_size": 63488 00:25:49.594 }, 00:25:49.594 { 00:25:49.594 "name": "BaseBdev4", 00:25:49.594 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:49.594 "is_configured": true, 00:25:49.594 "data_offset": 2048, 00:25:49.594 "data_size": 63488 00:25:49.594 } 00:25:49.594 ] 00:25:49.594 }' 00:25:49.594 13:11:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:49.594 13:11:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:49.594 13:11:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:49.594 13:11:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.594 13:11:08 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:50.968 "name": "raid_bdev1", 00:25:50.968 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:50.968 "strip_size_kb": 64, 00:25:50.968 "state": "online", 00:25:50.968 "raid_level": "raid5f", 00:25:50.968 "superblock": true, 00:25:50.968 "num_base_bdevs": 4, 00:25:50.968 "num_base_bdevs_discovered": 4, 00:25:50.968 "num_base_bdevs_operational": 4, 00:25:50.968 "process": { 00:25:50.968 "type": "rebuild", 00:25:50.968 "target": "spare", 00:25:50.968 "progress": { 00:25:50.968 "blocks": 105600, 00:25:50.968 
"percent": 55 00:25:50.968 } 00:25:50.968 }, 00:25:50.968 "base_bdevs_list": [ 00:25:50.968 { 00:25:50.968 "name": "spare", 00:25:50.968 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:50.968 "is_configured": true, 00:25:50.968 "data_offset": 2048, 00:25:50.968 "data_size": 63488 00:25:50.968 }, 00:25:50.968 { 00:25:50.968 "name": "BaseBdev2", 00:25:50.968 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:50.968 "is_configured": true, 00:25:50.968 "data_offset": 2048, 00:25:50.968 "data_size": 63488 00:25:50.968 }, 00:25:50.968 { 00:25:50.968 "name": "BaseBdev3", 00:25:50.968 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:50.968 "is_configured": true, 00:25:50.968 "data_offset": 2048, 00:25:50.968 "data_size": 63488 00:25:50.968 }, 00:25:50.968 { 00:25:50.968 "name": "BaseBdev4", 00:25:50.968 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:50.968 "is_configured": true, 00:25:50.968 "data_offset": 2048, 00:25:50.968 "data_size": 63488 00:25:50.968 } 00:25:50.968 ] 00:25:50.968 }' 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:50.968 13:11:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:51.901 13:11:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:51.901 13:11:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:51.901 13:11:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:51.901 13:11:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:51.901 13:11:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:51.901 13:11:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:51.901 13:11:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.901 13:11:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.160 13:11:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:52.160 "name": "raid_bdev1", 00:25:52.160 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:52.160 "strip_size_kb": 64, 00:25:52.160 "state": "online", 00:25:52.160 "raid_level": "raid5f", 00:25:52.160 "superblock": true, 00:25:52.160 "num_base_bdevs": 4, 00:25:52.160 "num_base_bdevs_discovered": 4, 00:25:52.160 "num_base_bdevs_operational": 4, 00:25:52.160 "process": { 00:25:52.160 "type": "rebuild", 00:25:52.160 "target": "spare", 00:25:52.160 "progress": { 00:25:52.160 "blocks": 132480, 00:25:52.160 "percent": 69 00:25:52.160 } 00:25:52.160 }, 00:25:52.160 "base_bdevs_list": [ 00:25:52.160 { 00:25:52.160 "name": "spare", 00:25:52.160 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:52.160 "is_configured": true, 00:25:52.160 "data_offset": 2048, 00:25:52.160 "data_size": 63488 00:25:52.160 }, 00:25:52.160 { 00:25:52.160 "name": "BaseBdev2", 00:25:52.160 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:52.160 "is_configured": true, 00:25:52.160 "data_offset": 2048, 00:25:52.160 "data_size": 63488 00:25:52.160 }, 00:25:52.160 { 00:25:52.160 "name": "BaseBdev3", 00:25:52.160 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:52.160 "is_configured": true, 00:25:52.160 "data_offset": 2048, 00:25:52.160 "data_size": 63488 00:25:52.160 }, 00:25:52.160 { 00:25:52.160 "name": "BaseBdev4", 00:25:52.160 "uuid": 
"15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:52.160 "is_configured": true, 00:25:52.160 "data_offset": 2048, 00:25:52.160 "data_size": 63488 00:25:52.160 } 00:25:52.160 ] 00:25:52.160 }' 00:25:52.160 13:11:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:52.447 13:11:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:52.447 13:11:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:52.447 13:11:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:52.447 13:11:11 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:53.411 13:11:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:53.411 13:11:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:53.411 13:11:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:53.411 13:11:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:53.411 13:11:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:53.411 13:11:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:53.411 13:11:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.411 13:11:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.669 13:11:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:53.669 "name": "raid_bdev1", 00:25:53.669 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:53.669 "strip_size_kb": 64, 00:25:53.669 "state": "online", 00:25:53.669 "raid_level": "raid5f", 00:25:53.669 "superblock": true, 00:25:53.669 "num_base_bdevs": 4, 00:25:53.669 "num_base_bdevs_discovered": 4, 00:25:53.669 "num_base_bdevs_operational": 4, 00:25:53.669 "process": { 00:25:53.669 "type": "rebuild", 00:25:53.669 "target": "spare", 00:25:53.669 "progress": { 00:25:53.669 "blocks": 157440, 00:25:53.669 "percent": 82 00:25:53.669 } 00:25:53.669 }, 00:25:53.669 "base_bdevs_list": [ 00:25:53.669 { 00:25:53.669 "name": "spare", 00:25:53.669 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:53.669 "is_configured": true, 00:25:53.669 "data_offset": 2048, 00:25:53.669 "data_size": 63488 00:25:53.669 }, 00:25:53.669 { 00:25:53.669 "name": "BaseBdev2", 00:25:53.669 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:53.669 "is_configured": true, 00:25:53.669 "data_offset": 2048, 00:25:53.669 "data_size": 63488 00:25:53.669 }, 00:25:53.669 { 00:25:53.669 "name": "BaseBdev3", 00:25:53.669 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:53.669 "is_configured": true, 00:25:53.669 "data_offset": 2048, 00:25:53.669 "data_size": 63488 00:25:53.669 }, 00:25:53.669 { 00:25:53.669 "name": "BaseBdev4", 00:25:53.669 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:53.669 "is_configured": true, 00:25:53.669 "data_offset": 2048, 00:25:53.669 "data_size": 63488 00:25:53.669 } 00:25:53.669 ] 00:25:53.669 }' 00:25:53.669 13:11:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:53.669 13:11:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:53.669 13:11:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:53.669 13:11:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:53.669 13:11:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:54.602 13:11:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:54.602 13:11:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:54.602 13:11:13 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:25:54.602 13:11:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:54.602 13:11:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:54.602 13:11:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:54.602 13:11:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.602 13:11:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.861 13:11:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:54.861 "name": "raid_bdev1", 00:25:54.861 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:54.861 "strip_size_kb": 64, 00:25:54.861 "state": "online", 00:25:54.861 "raid_level": "raid5f", 00:25:54.861 "superblock": true, 00:25:54.861 "num_base_bdevs": 4, 00:25:54.861 "num_base_bdevs_discovered": 4, 00:25:54.861 "num_base_bdevs_operational": 4, 00:25:54.861 "process": { 00:25:54.861 "type": "rebuild", 00:25:54.861 "target": "spare", 00:25:54.861 "progress": { 00:25:54.861 "blocks": 182400, 00:25:54.861 "percent": 95 00:25:54.861 } 00:25:54.861 }, 00:25:54.861 "base_bdevs_list": [ 00:25:54.861 { 00:25:54.861 "name": "spare", 00:25:54.861 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:54.861 "is_configured": true, 00:25:54.861 "data_offset": 2048, 00:25:54.861 "data_size": 63488 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "name": "BaseBdev2", 00:25:54.861 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:54.861 "is_configured": true, 00:25:54.861 "data_offset": 2048, 00:25:54.861 "data_size": 63488 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "name": "BaseBdev3", 00:25:54.861 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:54.861 "is_configured": true, 00:25:54.861 "data_offset": 2048, 00:25:54.861 "data_size": 63488 00:25:54.861 }, 00:25:54.861 { 00:25:54.861 "name": "BaseBdev4", 00:25:54.861 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:54.861 "is_configured": true, 00:25:54.861 "data_offset": 2048, 00:25:54.861 "data_size": 63488 00:25:54.861 } 00:25:54.861 ] 00:25:54.861 }' 00:25:54.861 13:11:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:55.120 13:11:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:55.120 13:11:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:55.120 13:11:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:55.120 13:11:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:55.378 [2024-06-11 13:11:14.044276] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:55.378 [2024-06-11 13:11:14.044701] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:55.378 [2024-06-11 13:11:14.045527] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:55.945 13:11:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:55.945 13:11:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:55.945 13:11:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:55.945 13:11:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:55.945 13:11:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:55.945 13:11:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:55.945 13:11:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.945 13:11:14 -- bdev/bdev_raid.sh@188 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:56.513 "name": "raid_bdev1", 00:25:56.513 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:56.513 "strip_size_kb": 64, 00:25:56.513 "state": "online", 00:25:56.513 "raid_level": "raid5f", 00:25:56.513 "superblock": true, 00:25:56.513 "num_base_bdevs": 4, 00:25:56.513 "num_base_bdevs_discovered": 4, 00:25:56.513 "num_base_bdevs_operational": 4, 00:25:56.513 "base_bdevs_list": [ 00:25:56.513 { 00:25:56.513 "name": "spare", 00:25:56.513 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:56.513 "is_configured": true, 00:25:56.513 "data_offset": 2048, 00:25:56.513 "data_size": 63488 00:25:56.513 }, 00:25:56.513 { 00:25:56.513 "name": "BaseBdev2", 00:25:56.513 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:56.513 "is_configured": true, 00:25:56.513 "data_offset": 2048, 00:25:56.513 "data_size": 63488 00:25:56.513 }, 00:25:56.513 { 00:25:56.513 "name": "BaseBdev3", 00:25:56.513 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:56.513 "is_configured": true, 00:25:56.513 "data_offset": 2048, 00:25:56.513 "data_size": 63488 00:25:56.513 }, 00:25:56.513 { 00:25:56.513 "name": "BaseBdev4", 00:25:56.513 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:56.513 "is_configured": true, 00:25:56.513 "data_offset": 2048, 00:25:56.513 "data_size": 63488 00:25:56.513 } 00:25:56.513 ] 00:25:56.513 }' 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@660 -- # break 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.513 13:11:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:56.772 "name": "raid_bdev1", 00:25:56.772 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:56.772 "strip_size_kb": 64, 00:25:56.772 "state": "online", 00:25:56.772 "raid_level": "raid5f", 00:25:56.772 "superblock": true, 00:25:56.772 "num_base_bdevs": 4, 00:25:56.772 "num_base_bdevs_discovered": 4, 00:25:56.772 "num_base_bdevs_operational": 4, 00:25:56.772 "base_bdevs_list": [ 00:25:56.772 { 00:25:56.772 "name": "spare", 00:25:56.772 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:56.772 "is_configured": true, 00:25:56.772 "data_offset": 2048, 00:25:56.772 "data_size": 63488 00:25:56.772 }, 00:25:56.772 { 00:25:56.772 "name": "BaseBdev2", 00:25:56.772 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:56.772 "is_configured": true, 00:25:56.772 "data_offset": 2048, 00:25:56.772 "data_size": 63488 00:25:56.772 }, 00:25:56.772 { 00:25:56.772 "name": "BaseBdev3", 00:25:56.772 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:56.772 "is_configured": true, 00:25:56.772 
"data_offset": 2048, 00:25:56.772 "data_size": 63488 00:25:56.772 }, 00:25:56.772 { 00:25:56.772 "name": "BaseBdev4", 00:25:56.772 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:56.772 "is_configured": true, 00:25:56.772 "data_offset": 2048, 00:25:56.772 "data_size": 63488 00:25:56.772 } 00:25:56.772 ] 00:25:56.772 }' 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.772 13:11:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.031 13:11:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:57.031 "name": "raid_bdev1", 00:25:57.031 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:25:57.031 "strip_size_kb": 64, 00:25:57.031 "state": "online", 00:25:57.031 "raid_level": "raid5f", 00:25:57.031 "superblock": true, 00:25:57.031 "num_base_bdevs": 4, 00:25:57.031 "num_base_bdevs_discovered": 4, 00:25:57.031 "num_base_bdevs_operational": 4, 00:25:57.031 "base_bdevs_list": [ 00:25:57.031 { 00:25:57.031 "name": "spare", 00:25:57.031 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:25:57.031 "is_configured": true, 00:25:57.031 "data_offset": 2048, 00:25:57.031 "data_size": 63488 00:25:57.031 }, 00:25:57.031 { 00:25:57.031 "name": "BaseBdev2", 00:25:57.031 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:25:57.031 "is_configured": true, 00:25:57.031 "data_offset": 2048, 00:25:57.031 "data_size": 63488 00:25:57.031 }, 00:25:57.031 { 00:25:57.031 "name": "BaseBdev3", 00:25:57.031 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:25:57.031 "is_configured": true, 00:25:57.031 "data_offset": 2048, 00:25:57.031 "data_size": 63488 00:25:57.031 }, 00:25:57.031 { 00:25:57.031 "name": "BaseBdev4", 00:25:57.031 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:25:57.031 "is_configured": true, 00:25:57.031 "data_offset": 2048, 00:25:57.031 "data_size": 63488 00:25:57.031 } 00:25:57.031 ] 00:25:57.031 }' 00:25:57.031 13:11:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:57.031 13:11:15 -- common/autotest_common.sh@10 -- # set +x 00:25:57.968 13:11:16 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:57.968 [2024-06-11 13:11:16.666890] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:57.968 [2024-06-11 13:11:16.667099] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:57.968 [2024-06-11 13:11:16.667327] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:57.968 [2024-06-11 13:11:16.667559] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:57.968 [2024-06-11 13:11:16.667682] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:25:57.968 13:11:16 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.968 13:11:16 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:58.227 13:11:16 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:58.227 13:11:16 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:58.227 13:11:16 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:58.227 13:11:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:58.227 13:11:16 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:58.227 13:11:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:58.227 13:11:16 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:58.227 13:11:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:58.227 13:11:16 -- bdev/nbd_common.sh@12 -- # local i 00:25:58.227 13:11:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:58.227 13:11:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:58.227 13:11:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:58.486 /dev/nbd0 00:25:58.486 13:11:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:58.486 13:11:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:58.486 13:11:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:58.486 13:11:17 -- common/autotest_common.sh@857 -- # local i 00:25:58.486 13:11:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:58.486 13:11:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:58.486 13:11:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:58.486 13:11:17 -- common/autotest_common.sh@861 -- # break 00:25:58.486 13:11:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:58.486 13:11:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:58.486 13:11:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:58.486 1+0 records in 00:25:58.486 1+0 records out 00:25:58.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284603 s, 14.4 MB/s 00:25:58.486 13:11:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:58.486 13:11:17 -- common/autotest_common.sh@874 -- # size=4096 00:25:58.486 13:11:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:58.486 13:11:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:58.486 13:11:17 -- common/autotest_common.sh@877 -- # return 0 00:25:58.486 13:11:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:58.486 13:11:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:58.486 13:11:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:58.744 /dev/nbd1 00:25:58.744 13:11:17 -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:25:58.744 13:11:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:58.744 13:11:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:58.745 13:11:17 -- common/autotest_common.sh@857 -- # local i 00:25:58.745 13:11:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:58.745 13:11:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:58.745 13:11:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:58.745 13:11:17 -- common/autotest_common.sh@861 -- # break 00:25:58.745 13:11:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:58.745 13:11:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:58.745 13:11:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:58.745 1+0 records in 00:25:58.745 1+0 records out 00:25:58.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366102 s, 11.2 MB/s 00:25:58.745 13:11:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:58.745 13:11:17 -- common/autotest_common.sh@874 -- # size=4096 00:25:58.745 13:11:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:58.745 13:11:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:58.745 13:11:17 -- common/autotest_common.sh@877 -- # return 0 00:25:58.745 13:11:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:58.745 13:11:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:58.745 13:11:17 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:58.745 13:11:17 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:58.745 13:11:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:58.745 13:11:17 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:58.745 13:11:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:58.745 13:11:17 -- bdev/nbd_common.sh@51 -- # local i 00:25:58.745 13:11:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:58.745 13:11:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:59.003 13:11:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:59.003 13:11:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:59.003 13:11:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:59.003 13:11:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:59.003 13:11:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.003 13:11:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:59.003 13:11:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:59.262 13:11:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:59.262 13:11:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.262 13:11:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:59.262 13:11:17 -- bdev/nbd_common.sh@41 -- # break 00:25:59.262 13:11:17 -- bdev/nbd_common.sh@45 -- # return 0 00:25:59.262 13:11:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:59.262 13:11:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:59.521 13:11:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:59.521 13:11:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:59.521 13:11:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:59.521 13:11:18 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:59.521 13:11:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.521 13:11:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:59.521 13:11:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:59.521 13:11:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:59.521 13:11:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.521 13:11:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:59.521 13:11:18 -- bdev/nbd_common.sh@41 -- # break 00:25:59.521 13:11:18 -- bdev/nbd_common.sh@45 -- # return 0 00:25:59.521 13:11:18 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:59.521 13:11:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:59.521 13:11:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:59.521 13:11:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:59.780 13:11:18 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:00.038 [2024-06-11 13:11:18.792728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:00.038 [2024-06-11 13:11:18.793005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.038 [2024-06-11 13:11:18.793102] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:00.038 [2024-06-11 13:11:18.793434] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.038 [2024-06-11 13:11:18.795914] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.038 [2024-06-11 13:11:18.796107] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:00.038 [2024-06-11 13:11:18.796317] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:00.038 [2024-06-11 13:11:18.796491] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:00.038 BaseBdev1 00:26:00.038 13:11:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:00.038 13:11:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:26:00.038 13:11:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:26:00.297 13:11:18 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:00.556 [2024-06-11 13:11:19.185354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:00.556 [2024-06-11 13:11:19.185604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.556 [2024-06-11 13:11:19.185683] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:00.556 [2024-06-11 13:11:19.186011] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.556 [2024-06-11 13:11:19.186486] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.556 [2024-06-11 13:11:19.186705] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:00.556 [2024-06-11 13:11:19.186890] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:26:00.556 
[2024-06-11 13:11:19.187040] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:26:00.556 [2024-06-11 13:11:19.187135] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:00.556 [2024-06-11 13:11:19.187188] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:26:00.556 [2024-06-11 13:11:19.187337] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:00.556 BaseBdev2 00:26:00.556 13:11:19 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:00.556 13:11:19 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:26:00.556 13:11:19 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:26:00.814 13:11:19 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:00.814 [2024-06-11 13:11:19.617446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:00.814 [2024-06-11 13:11:19.617661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.814 [2024-06-11 13:11:19.617727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:00.814 [2024-06-11 13:11:19.618026] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.814 [2024-06-11 13:11:19.618551] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.814 [2024-06-11 13:11:19.618734] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:00.814 [2024-06-11 13:11:19.618917] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:26:00.814 [2024-06-11 13:11:19.619038] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:00.814 BaseBdev3 00:26:00.814 13:11:19 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:00.814 13:11:19 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:26:00.814 13:11:19 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:26:01.073 13:11:19 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:01.332 [2024-06-11 13:11:20.041515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:01.332 [2024-06-11 13:11:20.041723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:01.332 [2024-06-11 13:11:20.041789] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:26:01.332 [2024-06-11 13:11:20.042028] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:01.332 [2024-06-11 13:11:20.042549] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:01.332 [2024-06-11 13:11:20.042722] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:01.332 [2024-06-11 13:11:20.042936] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:26:01.332 [2024-06-11 13:11:20.043066] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:01.332 BaseBdev4 00:26:01.332 13:11:20 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:01.590 13:11:20 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:01.849 [2024-06-11 13:11:20.445634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:01.849 [2024-06-11 13:11:20.445848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:01.849 [2024-06-11 13:11:20.445915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:26:01.849 [2024-06-11 13:11:20.446071] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:01.849 [2024-06-11 13:11:20.446546] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:01.849 [2024-06-11 13:11:20.446734] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:01.849 [2024-06-11 13:11:20.446941] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:01.849 [2024-06-11 13:11:20.447073] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:01.849 spare 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.849 [2024-06-11 13:11:20.547295] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:26:01.849 [2024-06-11 13:11:20.547435] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:01.849 [2024-06-11 13:11:20.547594] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004d200 00:26:01.849 [2024-06-11 13:11:20.552861] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:26:01.849 [2024-06-11 13:11:20.552992] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:26:01.849 [2024-06-11 13:11:20.553248] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:01.849 13:11:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:01.849 "name": "raid_bdev1", 00:26:01.849 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:26:01.849 "strip_size_kb": 64, 00:26:01.849 "state": "online", 00:26:01.849 "raid_level": "raid5f", 00:26:01.849 "superblock": true, 
00:26:01.849 "num_base_bdevs": 4, 00:26:01.849 "num_base_bdevs_discovered": 4, 00:26:01.849 "num_base_bdevs_operational": 4, 00:26:01.849 "base_bdevs_list": [ 00:26:01.849 { 00:26:01.849 "name": "spare", 00:26:01.849 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:26:01.849 "is_configured": true, 00:26:01.849 "data_offset": 2048, 00:26:01.849 "data_size": 63488 00:26:01.849 }, 00:26:01.849 { 00:26:01.849 "name": "BaseBdev2", 00:26:01.849 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:26:01.849 "is_configured": true, 00:26:01.849 "data_offset": 2048, 00:26:01.849 "data_size": 63488 00:26:01.849 }, 00:26:01.849 { 00:26:01.849 "name": "BaseBdev3", 00:26:01.849 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:26:01.849 "is_configured": true, 00:26:01.849 "data_offset": 2048, 00:26:01.849 "data_size": 63488 00:26:01.849 }, 00:26:01.849 { 00:26:01.849 "name": "BaseBdev4", 00:26:01.849 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:26:01.849 "is_configured": true, 00:26:01.849 "data_offset": 2048, 00:26:01.849 "data_size": 63488 00:26:01.849 } 00:26:01.850 ] 00:26:01.850 }' 00:26:01.850 13:11:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:01.850 13:11:20 -- common/autotest_common.sh@10 -- # set +x 00:26:02.785 13:11:21 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:02.785 13:11:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:02.785 13:11:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:02.785 13:11:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:02.785 13:11:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:02.785 13:11:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.785 13:11:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.785 13:11:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:02.785 "name": "raid_bdev1", 00:26:02.785 "uuid": "4a4d3a63-397e-4e09-925a-bea5f79a1428", 00:26:02.785 "strip_size_kb": 64, 00:26:02.785 "state": "online", 00:26:02.785 "raid_level": "raid5f", 00:26:02.785 "superblock": true, 00:26:02.785 "num_base_bdevs": 4, 00:26:02.785 "num_base_bdevs_discovered": 4, 00:26:02.785 "num_base_bdevs_operational": 4, 00:26:02.785 "base_bdevs_list": [ 00:26:02.785 { 00:26:02.785 "name": "spare", 00:26:02.785 "uuid": "e942d076-4779-5db5-a8c5-12cd95eb472d", 00:26:02.785 "is_configured": true, 00:26:02.785 "data_offset": 2048, 00:26:02.785 "data_size": 63488 00:26:02.785 }, 00:26:02.785 { 00:26:02.785 "name": "BaseBdev2", 00:26:02.785 "uuid": "4aa24334-f6ce-5201-84d9-1c20a7e4e19c", 00:26:02.785 "is_configured": true, 00:26:02.785 "data_offset": 2048, 00:26:02.785 "data_size": 63488 00:26:02.785 }, 00:26:02.785 { 00:26:02.785 "name": "BaseBdev3", 00:26:02.785 "uuid": "7d2a0b21-c32a-5449-a2f1-937f63655520", 00:26:02.785 "is_configured": true, 00:26:02.786 "data_offset": 2048, 00:26:02.786 "data_size": 63488 00:26:02.786 }, 00:26:02.786 { 00:26:02.786 "name": "BaseBdev4", 00:26:02.786 "uuid": "15d5ec9c-fb58-58e4-999f-f51d92fe01c2", 00:26:02.786 "is_configured": true, 00:26:02.786 "data_offset": 2048, 00:26:02.786 "data_size": 63488 00:26:02.786 } 00:26:02.786 ] 00:26:02.786 }' 00:26:02.786 13:11:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:03.044 13:11:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:03.044 13:11:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:03.044 13:11:21 -- 
bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:03.044 13:11:21 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.044 13:11:21 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:03.378 13:11:21 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:26:03.378 13:11:21 -- bdev/bdev_raid.sh@709 -- # killprocess 135539 00:26:03.378 13:11:21 -- common/autotest_common.sh@926 -- # '[' -z 135539 ']' 00:26:03.378 13:11:21 -- common/autotest_common.sh@930 -- # kill -0 135539 00:26:03.378 13:11:21 -- common/autotest_common.sh@931 -- # uname 00:26:03.378 13:11:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:03.378 13:11:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135539 00:26:03.378 killing process with pid 135539 00:26:03.378 Received shutdown signal, test time was about 60.000000 seconds 00:26:03.378 00:26:03.378 Latency(us) 00:26:03.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.378 =================================================================================================================== 00:26:03.378 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:03.378 13:11:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:03.378 13:11:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:03.378 13:11:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135539' 00:26:03.378 13:11:21 -- common/autotest_common.sh@945 -- # kill 135539 00:26:03.378 13:11:21 -- common/autotest_common.sh@950 -- # wait 135539 00:26:03.379 [2024-06-11 13:11:21.956245] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:03.379 [2024-06-11 13:11:21.956305] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:03.379 [2024-06-11 13:11:21.956375] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:03.379 [2024-06-11 13:11:21.956427] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:26:03.637 [2024-06-11 13:11:22.296490] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:04.568 ************************************ 00:26:04.568 END TEST raid5f_rebuild_test_sb 00:26:04.568 ************************************ 00:26:04.568 13:11:23 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:04.568 00:26:04.568 real 0m29.631s 00:26:04.568 user 0m45.373s 00:26:04.568 sys 0m3.037s 00:26:04.568 13:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:04.568 13:11:23 -- common/autotest_common.sh@10 -- # set +x 00:26:04.568 13:11:23 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:26:04.568 ************************************ 00:26:04.568 END TEST bdev_raid 00:26:04.568 ************************************ 00:26:04.568 00:26:04.568 real 12m4.850s 00:26:04.568 user 20m10.155s 00:26:04.568 sys 1m25.834s 00:26:04.568 13:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:04.568 13:11:23 -- common/autotest_common.sh@10 -- # set +x 00:26:04.827 13:11:23 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:04.827 13:11:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:04.827 13:11:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:04.827 13:11:23 -- common/autotest_common.sh@10 -- # 
set +x 00:26:04.827 ************************************ 00:26:04.827 START TEST bdevperf_config 00:26:04.827 ************************************ 00:26:04.827 13:11:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:04.827 * Looking for test storage... 00:26:04.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:26:04.827 13:11:23 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:26:04.827 13:11:23 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:26:04.827 13:11:23 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:26:04.827 13:11:23 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:04.827 13:11:23 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:04.827 13:11:23 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:26:04.827 13:11:23 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:04.827 13:11:23 -- bdevperf/common.sh@9 -- # local rw=read 00:26:04.827 13:11:23 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:04.827 13:11:23 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:04.827 13:11:23 -- bdevperf/common.sh@13 -- # cat 00:26:04.827 13:11:23 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:04.827 13:11:23 -- bdevperf/common.sh@19 -- # echo 00:26:04.827 00:26:04.827 13:11:23 -- bdevperf/common.sh@20 -- # cat 00:26:04.827 13:11:23 -- bdevperf/test_config.sh@18 -- # create_job job0 00:26:04.827 13:11:23 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:04.827 13:11:23 -- bdevperf/common.sh@9 -- # local rw= 00:26:04.827 13:11:23 -- bdevperf/common.sh@10 -- # local filename= 00:26:04.827 13:11:23 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:04.827 13:11:23 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:04.827 13:11:23 -- bdevperf/common.sh@19 -- # echo 00:26:04.827 00:26:04.827 13:11:23 -- bdevperf/common.sh@20 -- # cat 00:26:04.827 13:11:23 -- bdevperf/test_config.sh@19 -- # create_job job1 00:26:04.827 13:11:23 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:04.827 13:11:23 -- bdevperf/common.sh@9 -- # local rw= 00:26:04.827 13:11:23 -- bdevperf/common.sh@10 -- # local filename= 00:26:04.827 13:11:23 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:04.827 13:11:23 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:04.827 13:11:23 -- bdevperf/common.sh@19 -- # echo 00:26:04.827 00:26:04.827 13:11:23 -- bdevperf/common.sh@20 -- # cat 00:26:04.827 00:26:04.827 13:11:23 -- bdevperf/test_config.sh@20 -- # create_job job2 00:26:04.827 13:11:23 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:04.827 13:11:23 -- bdevperf/common.sh@9 -- # local rw= 00:26:04.827 13:11:23 -- bdevperf/common.sh@10 -- # local filename= 00:26:04.827 13:11:23 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:04.827 13:11:23 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:04.827 13:11:23 -- bdevperf/common.sh@19 -- # echo 00:26:04.827 13:11:23 -- bdevperf/common.sh@20 -- # cat 00:26:04.827 00:26:04.827 13:11:23 -- bdevperf/test_config.sh@21 -- # create_job job3 00:26:04.827 13:11:23 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:04.827 13:11:23 -- bdevperf/common.sh@9 -- # local rw= 00:26:04.827 13:11:23 -- bdevperf/common.sh@10 -- # local filename= 
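The create_job calls traced just above build the INI-style job file that is later handed to bdevperf via -j .../test.conf: each call appends a "[section]" header plus rw= and filename= keys when those arguments are non-empty, and the [global] section additionally cats in shared defaults. A minimal sketch of a helper with the same shape, assuming only the keys visible in this trace (the real bdevperf/common.sh may emit more fields):
# Path taken from the testconf assignment earlier in this trace.
testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
# Append one job section to the bdevperf job config (sketch of the traced pattern).
create_job() {
    local job_section=$1 rw=${2:-} filename=${3:-}
    {
        echo "[$job_section]"
        if [[ -n $rw ]]; then echo "rw=$rw"; fi
        if [[ -n $filename ]]; then echo "filename=$filename"; fi
    } >> "$testconf"
}
# Usage mirroring the trace: one global read job on Malloc0, then empty per-core jobs.
create_job global read Malloc0
create_job job0
create_job job1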
00:26:04.827 13:11:23 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:04.827 13:11:23 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:04.827 13:11:23 -- bdevperf/common.sh@19 -- # echo 00:26:04.827 13:11:23 -- bdevperf/common.sh@20 -- # cat 00:26:04.827 13:11:23 -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:09.012 13:11:27 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-06-11 13:11:23.583772] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:09.012 [2024-06-11 13:11:23.583932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136357 ] 00:26:09.012 Using job config with 4 jobs 00:26:09.012 [2024-06-11 13:11:23.738366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.012 [2024-06-11 13:11:23.931929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.012 cpumask for '\''job0'\'' is too big 00:26:09.012 cpumask for '\''job1'\'' is too big 00:26:09.012 cpumask for '\''job2'\'' is too big 00:26:09.012 cpumask for '\''job3'\'' is too big 00:26:09.012 Running I/O for 2 seconds... 00:26:09.012 00:26:09.012 Latency(us) 00:26:09.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.012 Malloc0 : 2.01 33254.56 32.48 0.00 0.00 7689.29 1474.56 11975.21 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.012 Malloc0 : 2.02 33264.05 32.48 0.00 0.00 7674.40 1377.75 10545.34 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.012 Malloc0 : 2.02 33242.21 32.46 0.00 0.00 7667.43 1407.53 9115.46 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.012 Malloc0 : 2.02 33220.82 32.44 0.00 0.00 7659.25 1377.75 8221.79 00:26:09.012 =================================================================================================================== 00:26:09.012 Total : 132981.64 129.86 0.00 0.00 7672.58 1377.75 11975.21' 00:26:09.012 13:11:27 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-06-11 13:11:23.583772] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:09.012 [2024-06-11 13:11:23.583932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136357 ] 00:26:09.012 Using job config with 4 jobs 00:26:09.012 [2024-06-11 13:11:23.738366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.012 [2024-06-11 13:11:23.931929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.012 cpumask for '\''job0'\'' is too big 00:26:09.012 cpumask for '\''job1'\'' is too big 00:26:09.012 cpumask for '\''job2'\'' is too big 00:26:09.012 cpumask for '\''job3'\'' is too big 00:26:09.012 Running I/O for 2 seconds... 
00:26:09.012 00:26:09.012 Latency(us) 00:26:09.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.012 Malloc0 : 2.01 33254.56 32.48 0.00 0.00 7689.29 1474.56 11975.21 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.012 Malloc0 : 2.02 33264.05 32.48 0.00 0.00 7674.40 1377.75 10545.34 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.012 Malloc0 : 2.02 33242.21 32.46 0.00 0.00 7667.43 1407.53 9115.46 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.012 Malloc0 : 2.02 33220.82 32.44 0.00 0.00 7659.25 1377.75 8221.79 00:26:09.012 =================================================================================================================== 00:26:09.012 Total : 132981.64 129.86 0.00 0.00 7672.58 1377.75 11975.21' 00:26:09.012 13:11:27 -- bdevperf/common.sh@32 -- # echo '[2024-06-11 13:11:23.583772] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:09.012 [2024-06-11 13:11:23.583932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136357 ] 00:26:09.012 Using job config with 4 jobs 00:26:09.012 [2024-06-11 13:11:23.738366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.012 [2024-06-11 13:11:23.931929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.012 cpumask for '\''job0'\'' is too big 00:26:09.012 cpumask for '\''job1'\'' is too big 00:26:09.012 cpumask for '\''job2'\'' is too big 00:26:09.012 cpumask for '\''job3'\'' is too big 00:26:09.012 Running I/O for 2 seconds... 00:26:09.012 00:26:09.012 Latency(us) 00:26:09.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.012 Malloc0 : 2.01 33254.56 32.48 0.00 0.00 7689.29 1474.56 11975.21 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.012 Malloc0 : 2.02 33264.05 32.48 0.00 0.00 7674.40 1377.75 10545.34 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.012 Malloc0 : 2.02 33242.21 32.46 0.00 0.00 7667.43 1407.53 9115.46 00:26:09.012 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:09.013 Malloc0 : 2.02 33220.82 32.44 0.00 0.00 7659.25 1377.75 8221.79 00:26:09.013 =================================================================================================================== 00:26:09.013 Total : 132981.64 129.86 0.00 0.00 7672.58 1377.75 11975.21' 00:26:09.013 13:11:27 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:09.013 13:11:27 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:09.013 13:11:27 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:26:09.013 13:11:27 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:09.013 [2024-06-11 13:11:27.711206] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:09.013 [2024-06-11 13:11:27.711601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136422 ] 00:26:09.271 [2024-06-11 13:11:27.879489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.271 [2024-06-11 13:11:28.088680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.839 cpumask for 'job0' is too big 00:26:09.839 cpumask for 'job1' is too big 00:26:09.839 cpumask for 'job2' is too big 00:26:09.839 cpumask for 'job3' is too big 00:26:13.127 13:11:31 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:26:13.127 Running I/O for 2 seconds... 00:26:13.127 00:26:13.127 Latency(us) 00:26:13.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:13.127 Malloc0 : 2.01 33194.51 32.42 0.00 0.00 7702.27 1482.01 11975.21 00:26:13.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:13.127 Malloc0 : 2.01 33170.10 32.39 0.00 0.00 7694.00 1377.75 10545.34 00:26:13.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:13.127 Malloc0 : 2.02 33147.18 32.37 0.00 0.00 7686.32 1414.98 9175.04 00:26:13.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:13.127 Malloc0 : 2.02 33121.88 32.35 0.00 0.00 7680.49 1385.19 8281.37 00:26:13.127 =================================================================================================================== 00:26:13.127 Total : 132633.67 129.53 0.00 0.00 7690.77 1377.75 11975.21' 00:26:13.127 13:11:31 -- bdevperf/test_config.sh@27 -- # cleanup 00:26:13.127 13:11:31 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:13.127 00:26:13.127 13:11:31 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:26:13.127 13:11:31 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:13.127 13:11:31 -- bdevperf/common.sh@9 -- # local rw=write 00:26:13.127 13:11:31 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:13.127 13:11:31 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:13.127 13:11:31 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:13.127 13:11:31 -- bdevperf/common.sh@19 -- # echo 00:26:13.127 13:11:31 -- bdevperf/common.sh@20 -- # cat 00:26:13.127 13:11:31 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:26:13.127 13:11:31 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:13.127 00:26:13.127 13:11:31 -- bdevperf/common.sh@9 -- # local rw=write 00:26:13.127 13:11:31 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:13.127 13:11:31 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:13.127 13:11:31 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:13.127 13:11:31 -- bdevperf/common.sh@19 -- # echo 00:26:13.127 13:11:31 -- bdevperf/common.sh@20 -- # cat 00:26:13.127 00:26:13.127 13:11:31 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:26:13.127 13:11:31 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:13.127 13:11:31 -- bdevperf/common.sh@9 -- # local rw=write 00:26:13.127 13:11:31 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:13.127 13:11:31 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:13.127 13:11:31 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:26:13.127 13:11:31 -- bdevperf/common.sh@19 -- # echo 00:26:13.127 13:11:31 -- bdevperf/common.sh@20 -- # cat 00:26:13.127 13:11:31 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:17.313 13:11:35 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-06-11 13:11:31.856703] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:17.313 [2024-06-11 13:11:31.856876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136474 ] 00:26:17.313 Using job config with 3 jobs 00:26:17.313 [2024-06-11 13:11:32.004862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.313 [2024-06-11 13:11:32.202867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.313 cpumask for '\''job0'\'' is too big 00:26:17.313 cpumask for '\''job1'\'' is too big 00:26:17.313 cpumask for '\''job2'\'' is too big 00:26:17.313 Running I/O for 2 seconds... 00:26:17.313 00:26:17.313 Latency(us) 00:26:17.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.313 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:17.313 Malloc0 : 2.01 43995.00 42.96 0.00 0.00 5813.09 1519.24 8579.26 00:26:17.313 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:17.313 Malloc0 : 2.01 44002.80 42.97 0.00 0.00 5802.17 1377.75 7208.96 00:26:17.313 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:17.313 Malloc0 : 2.01 43972.30 42.94 0.00 0.00 5796.52 1422.43 6494.02 00:26:17.313 =================================================================================================================== 00:26:17.313 Total : 131970.09 128.88 0.00 0.00 5803.92 1377.75 8579.26' 00:26:17.313 13:11:35 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-06-11 13:11:31.856703] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:17.313 [2024-06-11 13:11:31.856876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136474 ] 00:26:17.313 Using job config with 3 jobs 00:26:17.313 [2024-06-11 13:11:32.004862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.313 [2024-06-11 13:11:32.202867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.313 cpumask for '\''job0'\'' is too big 00:26:17.313 cpumask for '\''job1'\'' is too big 00:26:17.313 cpumask for '\''job2'\'' is too big 00:26:17.313 Running I/O for 2 seconds... 
00:26:17.313 00:26:17.313 Latency(us) 00:26:17.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.313 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:17.313 Malloc0 : 2.01 43995.00 42.96 0.00 0.00 5813.09 1519.24 8579.26 00:26:17.313 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:17.313 Malloc0 : 2.01 44002.80 42.97 0.00 0.00 5802.17 1377.75 7208.96 00:26:17.313 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:17.313 Malloc0 : 2.01 43972.30 42.94 0.00 0.00 5796.52 1422.43 6494.02 00:26:17.313 =================================================================================================================== 00:26:17.313 Total : 131970.09 128.88 0.00 0.00 5803.92 1377.75 8579.26' 00:26:17.313 13:11:35 -- bdevperf/common.sh@32 -- # echo '[2024-06-11 13:11:31.856703] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:17.313 [2024-06-11 13:11:31.856876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136474 ] 00:26:17.313 Using job config with 3 jobs 00:26:17.313 [2024-06-11 13:11:32.004862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.313 [2024-06-11 13:11:32.202867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.313 cpumask for '\''job0'\'' is too big 00:26:17.313 cpumask for '\''job1'\'' is too big 00:26:17.313 cpumask for '\''job2'\'' is too big 00:26:17.313 Running I/O for 2 seconds... 00:26:17.313 00:26:17.313 Latency(us) 00:26:17.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.313 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:17.313 Malloc0 : 2.01 43995.00 42.96 0.00 0.00 5813.09 1519.24 8579.26 00:26:17.313 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:17.313 Malloc0 : 2.01 44002.80 42.97 0.00 0.00 5802.17 1377.75 7208.96 00:26:17.313 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:17.313 Malloc0 : 2.01 43972.30 42.94 0.00 0.00 5796.52 1422.43 6494.02 00:26:17.313 =================================================================================================================== 00:26:17.313 Total : 131970.09 128.88 0.00 0.00 5803.92 1377.75 8579.26' 00:26:17.313 13:11:35 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:17.313 13:11:35 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:17.313 13:11:35 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:26:17.313 13:11:35 -- bdevperf/test_config.sh@35 -- # cleanup 00:26:17.313 13:11:35 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:17.313 13:11:35 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:26:17.313 13:11:35 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:17.313 13:11:35 -- bdevperf/common.sh@9 -- # local rw=rw 00:26:17.313 13:11:35 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:26:17.313 13:11:35 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:17.313 13:11:35 -- bdevperf/common.sh@13 -- # cat 00:26:17.313 00:26:17.313 13:11:35 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:17.313 13:11:35 -- bdevperf/common.sh@19 -- # echo 00:26:17.313 
13:11:35 -- bdevperf/common.sh@20 -- # cat 00:26:17.314 00:26:17.314 13:11:35 -- bdevperf/test_config.sh@38 -- # create_job job0 00:26:17.314 13:11:35 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:17.314 13:11:35 -- bdevperf/common.sh@9 -- # local rw= 00:26:17.314 13:11:35 -- bdevperf/common.sh@10 -- # local filename= 00:26:17.314 13:11:35 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:17.314 13:11:35 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:17.314 13:11:35 -- bdevperf/common.sh@19 -- # echo 00:26:17.314 13:11:35 -- bdevperf/common.sh@20 -- # cat 00:26:17.314 00:26:17.314 13:11:35 -- bdevperf/test_config.sh@39 -- # create_job job1 00:26:17.314 13:11:35 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:17.314 13:11:35 -- bdevperf/common.sh@9 -- # local rw= 00:26:17.314 13:11:35 -- bdevperf/common.sh@10 -- # local filename= 00:26:17.314 13:11:35 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:17.314 13:11:35 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:17.314 13:11:35 -- bdevperf/common.sh@19 -- # echo 00:26:17.314 13:11:35 -- bdevperf/common.sh@20 -- # cat 00:26:17.314 13:11:35 -- bdevperf/test_config.sh@40 -- # create_job job2 00:26:17.314 13:11:35 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:17.314 13:11:35 -- bdevperf/common.sh@9 -- # local rw= 00:26:17.314 13:11:35 -- bdevperf/common.sh@10 -- # local filename= 00:26:17.314 13:11:35 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:17.314 13:11:35 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:17.314 13:11:35 -- bdevperf/common.sh@19 -- # echo 00:26:17.314 00:26:17.314 13:11:35 -- bdevperf/common.sh@20 -- # cat 00:26:17.314 13:11:35 -- bdevperf/test_config.sh@41 -- # create_job job3 00:26:17.314 13:11:35 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:17.314 13:11:35 -- bdevperf/common.sh@9 -- # local rw= 00:26:17.314 13:11:35 -- bdevperf/common.sh@10 -- # local filename= 00:26:17.314 13:11:35 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:17.314 13:11:35 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:17.314 13:11:35 -- bdevperf/common.sh@19 -- # echo 00:26:17.314 00:26:17.314 13:11:35 -- bdevperf/common.sh@20 -- # cat 00:26:17.314 13:11:35 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:21.562 13:11:40 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-06-11 13:11:36.006467] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:21.562 [2024-06-11 13:11:36.006683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136551 ] 00:26:21.562 Using job config with 4 jobs 00:26:21.562 [2024-06-11 13:11:36.171909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.562 [2024-06-11 13:11:36.379957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.562 cpumask for '\''job0'\'' is too big 00:26:21.562 cpumask for '\''job1'\'' is too big 00:26:21.562 cpumask for '\''job2'\'' is too big 00:26:21.562 cpumask for '\''job3'\'' is too big 00:26:21.562 Running I/O for 2 seconds... 
00:26:21.562 00:26:21.562 Latency(us) 00:26:21.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.562 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.562 Malloc0 : 2.02 16626.86 16.24 0.00 0.00 15388.98 3068.28 24307.90 00:26:21.562 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.562 Malloc1 : 2.03 16631.12 16.24 0.00 0.00 15376.68 3544.90 24427.05 00:26:21.562 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.562 Malloc0 : 2.03 16620.41 16.23 0.00 0.00 15346.26 2829.96 21567.30 00:26:21.562 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.562 Malloc1 : 2.03 16609.42 16.22 0.00 0.00 15344.05 3351.27 21567.30 00:26:21.562 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.562 Malloc0 : 2.04 16598.67 16.21 0.00 0.00 15316.50 2964.01 18588.39 00:26:21.562 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.562 Malloc1 : 2.04 16587.69 16.20 0.00 0.00 15314.60 3395.96 18469.24 00:26:21.563 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc0 : 2.04 16577.06 16.19 0.00 0.00 15285.54 2993.80 15966.95 00:26:21.563 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc1 : 2.04 16566.02 16.18 0.00 0.00 15280.20 3455.53 15966.95 00:26:21.563 =================================================================================================================== 00:26:21.563 Total : 132817.25 129.70 0.00 0.00 15331.55 2829.96 24427.05' 00:26:21.563 13:11:40 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-06-11 13:11:36.006467] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:21.563 [2024-06-11 13:11:36.006683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136551 ] 00:26:21.563 Using job config with 4 jobs 00:26:21.563 [2024-06-11 13:11:36.171909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.563 [2024-06-11 13:11:36.379957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.563 cpumask for '\''job0'\'' is too big 00:26:21.563 cpumask for '\''job1'\'' is too big 00:26:21.563 cpumask for '\''job2'\'' is too big 00:26:21.563 cpumask for '\''job3'\'' is too big 00:26:21.563 Running I/O for 2 seconds... 
00:26:21.563 00:26:21.563 Latency(us) 00:26:21.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.563 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc0 : 2.02 16626.86 16.24 0.00 0.00 15388.98 3068.28 24307.90 00:26:21.563 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc1 : 2.03 16631.12 16.24 0.00 0.00 15376.68 3544.90 24427.05 00:26:21.563 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc0 : 2.03 16620.41 16.23 0.00 0.00 15346.26 2829.96 21567.30 00:26:21.563 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc1 : 2.03 16609.42 16.22 0.00 0.00 15344.05 3351.27 21567.30 00:26:21.563 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc0 : 2.04 16598.67 16.21 0.00 0.00 15316.50 2964.01 18588.39 00:26:21.563 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc1 : 2.04 16587.69 16.20 0.00 0.00 15314.60 3395.96 18469.24 00:26:21.563 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc0 : 2.04 16577.06 16.19 0.00 0.00 15285.54 2993.80 15966.95 00:26:21.563 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc1 : 2.04 16566.02 16.18 0.00 0.00 15280.20 3455.53 15966.95 00:26:21.563 =================================================================================================================== 00:26:21.563 Total : 132817.25 129.70 0.00 0.00 15331.55 2829.96 24427.05' 00:26:21.563 13:11:40 -- bdevperf/common.sh@32 -- # echo '[2024-06-11 13:11:36.006467] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:21.563 [2024-06-11 13:11:36.006683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136551 ] 00:26:21.563 Using job config with 4 jobs 00:26:21.563 [2024-06-11 13:11:36.171909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.563 [2024-06-11 13:11:36.379957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.563 cpumask for '\''job0'\'' is too big 00:26:21.563 cpumask for '\''job1'\'' is too big 00:26:21.563 cpumask for '\''job2'\'' is too big 00:26:21.563 cpumask for '\''job3'\'' is too big 00:26:21.563 Running I/O for 2 seconds... 
00:26:21.563 00:26:21.563 Latency(us) 00:26:21.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.563 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc0 : 2.02 16626.86 16.24 0.00 0.00 15388.98 3068.28 24307.90 00:26:21.563 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc1 : 2.03 16631.12 16.24 0.00 0.00 15376.68 3544.90 24427.05 00:26:21.563 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc0 : 2.03 16620.41 16.23 0.00 0.00 15346.26 2829.96 21567.30 00:26:21.563 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc1 : 2.03 16609.42 16.22 0.00 0.00 15344.05 3351.27 21567.30 00:26:21.563 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc0 : 2.04 16598.67 16.21 0.00 0.00 15316.50 2964.01 18588.39 00:26:21.563 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc1 : 2.04 16587.69 16.20 0.00 0.00 15314.60 3395.96 18469.24 00:26:21.563 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc0 : 2.04 16577.06 16.19 0.00 0.00 15285.54 2993.80 15966.95 00:26:21.563 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:21.563 Malloc1 : 2.04 16566.02 16.18 0.00 0.00 15280.20 3455.53 15966.95 00:26:21.563 =================================================================================================================== 00:26:21.563 Total : 132817.25 129.70 0.00 0.00 15331.55 2829.96 24427.05' 00:26:21.563 13:11:40 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:21.563 13:11:40 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:21.563 13:11:40 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:26:21.563 13:11:40 -- bdevperf/test_config.sh@44 -- # cleanup 00:26:21.563 13:11:40 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:21.563 13:11:40 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:21.563 ************************************ 00:26:21.563 END TEST bdevperf_config 00:26:21.563 ************************************ 00:26:21.563 00:26:21.563 real 0m16.677s 00:26:21.563 user 0m14.822s 00:26:21.563 sys 0m1.265s 00:26:21.563 13:11:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.563 13:11:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.563 13:11:40 -- spdk/autotest.sh@198 -- # uname -s 00:26:21.563 13:11:40 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:26:21.563 13:11:40 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:21.563 13:11:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.563 13:11:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.563 13:11:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.563 ************************************ 00:26:21.563 START TEST reactor_set_interrupt 00:26:21.563 ************************************ 00:26:21.563 13:11:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:21.563 * Looking for test storage... 
00:26:21.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:21.563 13:11:40 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:21.563 13:11:40 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:21.563 13:11:40 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:21.563 13:11:40 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:21.563 13:11:40 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:26:21.563 13:11:40 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:21.563 13:11:40 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:21.563 13:11:40 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:21.563 13:11:40 -- common/autotest_common.sh@34 -- # set -e 00:26:21.563 13:11:40 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:21.563 13:11:40 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:21.563 13:11:40 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:21.563 13:11:40 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:21.563 13:11:40 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:21.563 13:11:40 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:26:21.563 13:11:40 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:26:21.563 13:11:40 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:26:21.563 13:11:40 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:26:21.563 13:11:40 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:26:21.563 13:11:40 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:26:21.563 13:11:40 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:26:21.563 13:11:40 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:26:21.563 13:11:40 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:26:21.563 13:11:40 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:26:21.563 13:11:40 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:26:21.563 13:11:40 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:26:21.563 13:11:40 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:26:21.563 13:11:40 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:26:21.563 13:11:40 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:26:21.563 13:11:40 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:26:21.563 13:11:40 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:26:21.563 13:11:40 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:26:21.563 13:11:40 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:26:21.563 13:11:40 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:26:21.563 13:11:40 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:21.563 13:11:40 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:26:21.563 13:11:40 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:26:21.563 13:11:40 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:26:21.564 13:11:40 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:26:21.564 13:11:40 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:26:21.564 13:11:40 -- common/build_config.sh@28 -- # 
CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:21.564 13:11:40 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:26:21.564 13:11:40 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:26:21.564 13:11:40 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:26:21.564 13:11:40 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:26:21.564 13:11:40 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:26:21.564 13:11:40 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:26:21.564 13:11:40 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:26:21.564 13:11:40 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:26:21.564 13:11:40 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:21.564 13:11:40 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:26:21.564 13:11:40 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:26:21.564 13:11:40 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:26:21.564 13:11:40 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:26:21.564 13:11:40 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:26:21.564 13:11:40 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:26:21.564 13:11:40 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:26:21.564 13:11:40 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:21.564 13:11:40 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:26:21.564 13:11:40 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:26:21.564 13:11:40 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:26:21.564 13:11:40 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:26:21.564 13:11:40 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:26:21.564 13:11:40 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:21.564 13:11:40 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:26:21.564 13:11:40 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:26:21.564 13:11:40 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:26:21.564 13:11:40 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:21.564 13:11:40 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:26:21.564 13:11:40 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:26:21.564 13:11:40 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:21.564 13:11:40 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:21.564 13:11:40 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:21.564 13:11:40 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:26:21.564 13:11:40 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:26:21.564 13:11:40 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:26:21.564 13:11:40 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:26:21.564 13:11:40 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:26:21.564 13:11:40 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:26:21.564 13:11:40 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:26:21.564 13:11:40 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:21.564 13:11:40 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:26:21.564 13:11:40 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:26:21.564 13:11:40 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:26:21.564 13:11:40 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:26:21.564 13:11:40 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:26:21.564 13:11:40 -- common/build_config.sh@74 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:21.564 13:11:40 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:26:21.564 13:11:40 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:26:21.564 13:11:40 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:26:21.564 13:11:40 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:26:21.564 13:11:40 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:21.564 13:11:40 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:21.564 13:11:40 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:21.564 13:11:40 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:21.564 13:11:40 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:21.564 13:11:40 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:21.564 13:11:40 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:21.564 13:11:40 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:21.564 13:11:40 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:21.564 13:11:40 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:21.564 13:11:40 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:21.564 13:11:40 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:21.564 13:11:40 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:21.564 13:11:40 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:21.564 13:11:40 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:21.564 13:11:40 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:21.564 13:11:40 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:21.564 #define SPDK_CONFIG_H 00:26:21.564 #define SPDK_CONFIG_APPS 1 00:26:21.564 #define SPDK_CONFIG_ARCH native 00:26:21.564 #define SPDK_CONFIG_ASAN 1 00:26:21.564 #undef SPDK_CONFIG_AVAHI 00:26:21.564 #undef SPDK_CONFIG_CET 00:26:21.564 #define SPDK_CONFIG_COVERAGE 1 00:26:21.564 #define SPDK_CONFIG_CROSS_PREFIX 00:26:21.564 #undef SPDK_CONFIG_CRYPTO 00:26:21.564 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:21.564 #undef SPDK_CONFIG_CUSTOMOCF 00:26:21.564 #undef SPDK_CONFIG_DAOS 00:26:21.564 #define SPDK_CONFIG_DAOS_DIR 00:26:21.564 #define SPDK_CONFIG_DEBUG 1 00:26:21.564 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:21.564 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:21.564 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:21.564 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:21.564 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:21.564 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:21.564 #define SPDK_CONFIG_EXAMPLES 1 00:26:21.564 #undef SPDK_CONFIG_FC 00:26:21.564 #define SPDK_CONFIG_FC_PATH 00:26:21.564 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:21.564 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:21.564 #undef SPDK_CONFIG_FUSE 00:26:21.564 #undef SPDK_CONFIG_FUZZER 00:26:21.564 #define SPDK_CONFIG_FUZZER_LIB 00:26:21.564 #undef SPDK_CONFIG_GOLANG 00:26:21.564 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:21.564 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:21.564 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:21.564 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:21.564 #define 
SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:21.564 #define SPDK_CONFIG_IDXD 1 00:26:21.564 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:21.564 #undef SPDK_CONFIG_IPSEC_MB 00:26:21.564 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:21.564 #define SPDK_CONFIG_ISAL 1 00:26:21.564 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:21.564 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:21.564 #define SPDK_CONFIG_LIBDIR 00:26:21.564 #undef SPDK_CONFIG_LTO 00:26:21.564 #define SPDK_CONFIG_MAX_LCORES 00:26:21.564 #define SPDK_CONFIG_NVME_CUSE 1 00:26:21.564 #undef SPDK_CONFIG_OCF 00:26:21.564 #define SPDK_CONFIG_OCF_PATH 00:26:21.564 #define SPDK_CONFIG_OPENSSL_PATH 00:26:21.564 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:21.564 #undef SPDK_CONFIG_PGO_USE 00:26:21.564 #define SPDK_CONFIG_PREFIX /usr/local 00:26:21.564 #define SPDK_CONFIG_RAID5F 1 00:26:21.564 #undef SPDK_CONFIG_RBD 00:26:21.564 #define SPDK_CONFIG_RDMA 1 00:26:21.564 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:21.564 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:21.564 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:21.564 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:21.564 #undef SPDK_CONFIG_SHARED 00:26:21.564 #undef SPDK_CONFIG_SMA 00:26:21.564 #define SPDK_CONFIG_TESTS 1 00:26:21.564 #undef SPDK_CONFIG_TSAN 00:26:21.564 #undef SPDK_CONFIG_UBLK 00:26:21.564 #define SPDK_CONFIG_UBSAN 1 00:26:21.564 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:21.564 #undef SPDK_CONFIG_URING 00:26:21.564 #define SPDK_CONFIG_URING_PATH 00:26:21.564 #undef SPDK_CONFIG_URING_ZNS 00:26:21.564 #undef SPDK_CONFIG_USDT 00:26:21.564 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:21.564 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:21.564 #undef SPDK_CONFIG_VFIO_USER 00:26:21.564 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:21.564 #define SPDK_CONFIG_VHOST 1 00:26:21.564 #define SPDK_CONFIG_VIRTIO 1 00:26:21.564 #undef SPDK_CONFIG_VTUNE 00:26:21.564 #define SPDK_CONFIG_VTUNE_DIR 00:26:21.564 #define SPDK_CONFIG_WERROR 1 00:26:21.564 #define SPDK_CONFIG_WPDK_DIR 00:26:21.564 #undef SPDK_CONFIG_XNVME 00:26:21.564 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:21.564 13:11:40 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:21.564 13:11:40 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:21.564 13:11:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.564 13:11:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.564 13:11:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.564 13:11:40 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.564 13:11:40 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.564 13:11:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.564 13:11:40 -- paths/export.sh@5 -- # export PATH 00:26:21.565 13:11:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.565 13:11:40 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:21.565 13:11:40 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:21.565 13:11:40 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:21.565 13:11:40 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:21.565 13:11:40 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:21.565 13:11:40 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:21.565 13:11:40 -- pm/common@16 -- # TEST_TAG=N/A 00:26:21.565 13:11:40 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:21.565 13:11:40 -- common/autotest_common.sh@52 -- # : 1 00:26:21.565 13:11:40 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:21.565 13:11:40 -- common/autotest_common.sh@56 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:21.565 13:11:40 -- common/autotest_common.sh@58 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:21.565 13:11:40 -- common/autotest_common.sh@60 -- # : 1 00:26:21.565 13:11:40 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:21.565 13:11:40 -- common/autotest_common.sh@62 -- # : 1 00:26:21.565 13:11:40 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:21.565 13:11:40 -- common/autotest_common.sh@64 -- # : 00:26:21.565 13:11:40 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:21.565 13:11:40 -- common/autotest_common.sh@66 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:21.565 13:11:40 -- common/autotest_common.sh@68 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:21.565 13:11:40 -- common/autotest_common.sh@70 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:21.565 13:11:40 -- common/autotest_common.sh@72 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:21.565 13:11:40 -- common/autotest_common.sh@74 -- # : 1 00:26:21.565 13:11:40 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:21.565 13:11:40 -- common/autotest_common.sh@76 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:21.565 13:11:40 -- common/autotest_common.sh@78 -- # : 0 00:26:21.565 13:11:40 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:21.565 13:11:40 -- common/autotest_common.sh@80 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:21.565 13:11:40 -- common/autotest_common.sh@82 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:21.565 13:11:40 -- common/autotest_common.sh@84 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:21.565 13:11:40 -- common/autotest_common.sh@86 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:21.565 13:11:40 -- common/autotest_common.sh@88 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:21.565 13:11:40 -- common/autotest_common.sh@90 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:21.565 13:11:40 -- common/autotest_common.sh@92 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:21.565 13:11:40 -- common/autotest_common.sh@94 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:21.565 13:11:40 -- common/autotest_common.sh@96 -- # : rdma 00:26:21.565 13:11:40 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:21.565 13:11:40 -- common/autotest_common.sh@98 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:21.565 13:11:40 -- common/autotest_common.sh@100 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:21.565 13:11:40 -- common/autotest_common.sh@102 -- # : 1 00:26:21.565 13:11:40 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:21.565 13:11:40 -- common/autotest_common.sh@104 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:21.565 13:11:40 -- common/autotest_common.sh@106 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:21.565 13:11:40 -- common/autotest_common.sh@108 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:21.565 13:11:40 -- common/autotest_common.sh@110 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:21.565 13:11:40 -- common/autotest_common.sh@112 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:21.565 13:11:40 -- common/autotest_common.sh@114 -- # : 1 00:26:21.565 13:11:40 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:21.565 13:11:40 -- common/autotest_common.sh@116 -- # : 1 00:26:21.565 13:11:40 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:21.565 13:11:40 -- common/autotest_common.sh@118 -- # : 00:26:21.565 13:11:40 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:21.565 13:11:40 -- common/autotest_common.sh@120 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:21.565 13:11:40 -- common/autotest_common.sh@122 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:21.565 13:11:40 -- common/autotest_common.sh@124 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:21.565 13:11:40 -- common/autotest_common.sh@126 -- # : 0 00:26:21.565 
13:11:40 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:21.565 13:11:40 -- common/autotest_common.sh@128 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:21.565 13:11:40 -- common/autotest_common.sh@130 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:21.565 13:11:40 -- common/autotest_common.sh@132 -- # : 00:26:21.565 13:11:40 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:21.565 13:11:40 -- common/autotest_common.sh@134 -- # : true 00:26:21.565 13:11:40 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:21.565 13:11:40 -- common/autotest_common.sh@136 -- # : 1 00:26:21.565 13:11:40 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:21.565 13:11:40 -- common/autotest_common.sh@138 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:21.565 13:11:40 -- common/autotest_common.sh@140 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:21.565 13:11:40 -- common/autotest_common.sh@142 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:21.565 13:11:40 -- common/autotest_common.sh@144 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:21.565 13:11:40 -- common/autotest_common.sh@146 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:21.565 13:11:40 -- common/autotest_common.sh@148 -- # : 00:26:21.565 13:11:40 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:21.565 13:11:40 -- common/autotest_common.sh@150 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:21.565 13:11:40 -- common/autotest_common.sh@152 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:21.565 13:11:40 -- common/autotest_common.sh@154 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:21.565 13:11:40 -- common/autotest_common.sh@156 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:21.565 13:11:40 -- common/autotest_common.sh@158 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:21.565 13:11:40 -- common/autotest_common.sh@160 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:21.565 13:11:40 -- common/autotest_common.sh@163 -- # : 00:26:21.565 13:11:40 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:21.565 13:11:40 -- common/autotest_common.sh@165 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:21.565 13:11:40 -- common/autotest_common.sh@167 -- # : 0 00:26:21.565 13:11:40 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:21.565 13:11:40 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:21.565 13:11:40 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:21.565 13:11:40 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:21.565 13:11:40 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:21.565 13:11:40 -- 
common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:21.565 13:11:40 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:21.565 13:11:40 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:21.565 13:11:40 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:21.565 13:11:40 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:21.565 13:11:40 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:21.565 13:11:40 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:21.565 13:11:40 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:21.565 13:11:40 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:21.565 13:11:40 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:21.565 13:11:40 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:21.566 13:11:40 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:21.566 13:11:40 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:21.566 13:11:40 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:21.566 13:11:40 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:21.566 13:11:40 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:21.566 13:11:40 -- common/autotest_common.sh@196 -- # cat 00:26:21.566 13:11:40 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:21.566 13:11:40 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:21.566 13:11:40 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:21.566 13:11:40 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:21.566 
13:11:40 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:21.566 13:11:40 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:21.566 13:11:40 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:21.566 13:11:40 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:21.566 13:11:40 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:21.566 13:11:40 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:21.566 13:11:40 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:21.566 13:11:40 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:21.566 13:11:40 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:21.566 13:11:40 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:21.566 13:11:40 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:21.566 13:11:40 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:21.566 13:11:40 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:21.566 13:11:40 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:21.566 13:11:40 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:21.566 13:11:40 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:26:21.566 13:11:40 -- common/autotest_common.sh@249 -- # export valgrind= 00:26:21.566 13:11:40 -- common/autotest_common.sh@249 -- # valgrind= 00:26:21.566 13:11:40 -- common/autotest_common.sh@255 -- # uname -s 00:26:21.566 13:11:40 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:26:21.566 13:11:40 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:26:21.566 13:11:40 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:26:21.566 13:11:40 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:26:21.566 13:11:40 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:21.566 13:11:40 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:21.566 13:11:40 -- common/autotest_common.sh@265 -- # MAKE=make 00:26:21.566 13:11:40 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:26:21.566 13:11:40 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:26:21.566 13:11:40 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:26:21.566 13:11:40 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:21.566 13:11:40 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:26:21.566 13:11:40 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:26:21.566 13:11:40 -- common/autotest_common.sh@309 -- # [[ -z 136642 ]] 00:26:21.566 13:11:40 -- common/autotest_common.sh@309 -- # kill -0 136642 00:26:21.566 13:11:40 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:26:21.566 13:11:40 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:26:21.566 13:11:40 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:26:21.566 13:11:40 -- common/autotest_common.sh@322 -- # local mount target_dir 00:26:21.566 13:11:40 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:26:21.566 13:11:40 -- common/autotest_common.sh@325 -- # local source fs size 
avail mount use 00:26:21.566 13:11:40 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:26:21.566 13:11:40 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:26:21.566 13:11:40 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.sNzQwf 00:26:21.566 13:11:40 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:21.566 13:11:40 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:26:21.566 13:11:40 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:26:21.566 13:11:40 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.sNzQwf/tests/interrupt /tmp/spdk.sNzQwf 00:26:21.566 13:11:40 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:26:21.566 13:11:40 -- common/autotest_common.sh@318 -- # df -T 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224461824 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224461824 00:26:21.566 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:26:21.566 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=10612318208 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:26:21.566 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=9987698688 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=6269968384 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272561152 00:26:21.566 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:26:21.566 13:11:40 -- 
common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272561152 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272561152 00:26:21.566 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:21.566 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:26:21.566 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:26:21.566 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:26:21.566 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:26:21.566 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:26:21.566 13:11:40 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=97170456576 00:26:21.566 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:26:21.566 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=2532323328 00:26:21.566 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.566 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:26:21.567 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:21.567 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:21.567 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:26:21.567 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:26:21.567 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.567 13:11:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:26:21.567 13:11:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:21.567 13:11:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:21.567 13:11:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:21.567 13:11:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:21.567 13:11:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:21.567 13:11:40 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:26:21.567 * Looking for test storage... 00:26:21.567 13:11:40 -- common/autotest_common.sh@359 -- # local target_space new_size 00:26:21.567 13:11:40 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:26:21.567 13:11:40 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:21.567 13:11:40 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:21.567 13:11:40 -- common/autotest_common.sh@363 -- # mount=/ 00:26:21.567 13:11:40 -- common/autotest_common.sh@365 -- # target_space=10612318208 00:26:21.567 13:11:40 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:26:21.567 13:11:40 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:26:21.567 13:11:40 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:26:21.567 13:11:40 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:26:21.567 13:11:40 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:26:21.567 13:11:40 -- common/autotest_common.sh@372 -- # new_size=12202291200 00:26:21.567 13:11:40 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:21.567 13:11:40 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:21.567 13:11:40 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:21.567 13:11:40 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:21.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:21.567 13:11:40 -- common/autotest_common.sh@380 -- # return 0 00:26:21.567 13:11:40 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:26:21.567 13:11:40 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:26:21.567 13:11:40 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:21.567 13:11:40 -- 
common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:21.567 13:11:40 -- common/autotest_common.sh@1672 -- # true 00:26:21.567 13:11:40 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:26:21.567 13:11:40 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:21.567 13:11:40 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:21.567 13:11:40 -- common/autotest_common.sh@27 -- # exec 00:26:21.567 13:11:40 -- common/autotest_common.sh@29 -- # exec 00:26:21.567 13:11:40 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:21.567 13:11:40 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:26:21.567 13:11:40 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:21.567 13:11:40 -- common/autotest_common.sh@18 -- # set -x 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:21.567 13:11:40 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:21.567 13:11:40 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:21.567 13:11:40 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=136682 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:21.567 13:11:40 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 136682 /var/tmp/spdk.sock 00:26:21.567 13:11:40 -- common/autotest_common.sh@819 -- # '[' -z 136682 ']' 00:26:21.567 13:11:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.567 13:11:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:21.567 13:11:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.567 13:11:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:21.567 13:11:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.825 [2024-06-11 13:11:40.406546] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
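The df/mount parsing earlier in this trace is the harness picking a scratch directory whose filesystem still has at least requested_size=2214592512 bytes free before the interrupt tests start. A minimal bash sketch of the same idea, using df --output instead of the script's df -T/awk pair; the candidate directory names are illustrative, not the ones the harness uses:

    #!/usr/bin/env bash
    # Pick the first candidate directory whose filesystem has enough free space.
    requested_size=2214592512            # bytes, the value seen in the trace above
    candidates=("$HOME/spdk_scratch" "/tmp/spdk_scratch")   # illustrative names

    for dir in "${candidates[@]}"; do
        mkdir -p "$dir"
        # df -B1 --output=avail prints free bytes for the filesystem holding $dir
        avail=$(df -B1 --output=avail "$dir" | tail -n1)
        if (( avail >= requested_size )); then
            echo "using $dir ($avail bytes free)"
            break
        fi
    done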
00:26:21.825 [2024-06-11 13:11:40.406956] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136682 ] 00:26:21.825 [2024-06-11 13:11:40.585330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:22.084 [2024-06-11 13:11:40.767498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.084 [2024-06-11 13:11:40.767614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.084 [2024-06-11 13:11:40.767622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.342 [2024-06-11 13:11:41.044479] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:22.600 13:11:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:22.600 13:11:41 -- common/autotest_common.sh@852 -- # return 0 00:26:22.600 13:11:41 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:26:22.600 13:11:41 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:22.857 Malloc0 00:26:22.857 Malloc1 00:26:22.857 Malloc2 00:26:22.857 13:11:41 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:26:22.857 13:11:41 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:22.857 13:11:41 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:22.857 13:11:41 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:22.857 5000+0 records in 00:26:22.857 5000+0 records out 00:26:22.857 10240000 bytes (10 MB, 9.8 MiB) copied, 0.024719 s, 414 MB/s 00:26:22.857 13:11:41 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:23.424 AIO0 00:26:23.424 13:11:41 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 136682 00:26:23.424 13:11:41 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 136682 without_thd 00:26:23.424 13:11:41 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=136682 00:26:23.424 13:11:41 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:26:23.424 13:11:41 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:23.424 13:11:41 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:23.424 13:11:41 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:23.424 13:11:41 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:23.424 13:11:41 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:23.424 13:11:41 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:23.424 13:11:41 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:23.424 13:11:41 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:23.424 13:11:42 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:23.424 13:11:42 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:23.424 13:11:42 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:26:23.424 13:11:42 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:23.424 13:11:42 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:23.424 13:11:42 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:23.424 13:11:42 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:23.424 13:11:42 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:23.424 13:11:42 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:23.683 spdk_thread ids are 1 on reactor0. 00:26:23.683 13:11:42 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:23.683 13:11:42 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:23.683 13:11:42 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:23.683 13:11:42 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136682 0 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136682 0 idle 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@33 -- # local pid=136682 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136682 -w 256 00:26:23.683 13:11:42 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136682 root 20 0 20.1t 145668 28788 S 0.0 1.2 0:00.70 reactor_0' 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@48 -- # echo 136682 root 20 0 20.1t 145668 28788 S 0.0 1.2 0:00.70 reactor_0 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:23.942 13:11:42 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:23.942 13:11:42 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136682 1 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136682 1 idle 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@33 -- # local pid=136682 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:23.942 
13:11:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136682 -w 256 00:26:23.942 13:11:42 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136685 root 20 0 20.1t 145668 28788 S 0.0 1.2 0:00.00 reactor_1' 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@48 -- # echo 136685 root 20 0 20.1t 145668 28788 S 0.0 1.2 0:00.00 reactor_1 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:24.201 13:11:42 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:24.201 13:11:42 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136682 2 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136682 2 idle 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@33 -- # local pid=136682 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136682 -w 256 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136686 root 20 0 20.1t 145668 28788 S 0.0 1.2 0:00.00 reactor_2' 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@48 -- # echo 136686 root 20 0 20.1t 145668 28788 S 0.0 1.2 0:00.00 reactor_2 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:24.201 13:11:42 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:24.201 13:11:42 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:26:24.201 13:11:42 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
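The reactor_is_idle checks above shell out to top in batch thread mode and read the %CPU column for the matching reactor thread. A rough standalone sketch of that check, with thresholds close to the ones the trace compares against (busy roughly means >=70%, idle roughly means <=30%); the pid and thread name are placeholders:

    # Report whether a named SPDK reactor thread looks busy or idle.
    # $1 = target pid, $2 = thread name, e.g. check_reactor 136682 reactor_0
    check_reactor() {
        local pid=$1 name=$2 line cpu
        line=$(top -bHn 1 -p "$pid" -w 256 | grep "$name")
        cpu=$(echo "$line" | awk '{print $9}')   # %CPU column in top's thread view
        cpu=${cpu%.*}                            # drop the fractional part
        if (( cpu >= 70 )); then
            echo "$name is busy (${cpu}%)"
        elif (( cpu <= 30 )); then
            echo "$name is idle (${cpu}%)"
        else
            echo "$name is in between (${cpu}%)"
        fi
    }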
00:26:24.201 13:11:42 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:26:24.461 [2024-06-11 13:11:43.213126] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:24.461 13:11:43 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:24.719 [2024-06-11 13:11:43.460850] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:24.719 [2024-06-11 13:11:43.461795] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:24.719 13:11:43 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:24.978 [2024-06-11 13:11:43.712735] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:24.978 [2024-06-11 13:11:43.713574] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:24.978 13:11:43 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:24.978 13:11:43 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136682 0 00:26:24.978 13:11:43 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136682 0 busy 00:26:24.978 13:11:43 -- interrupt/interrupt_common.sh@33 -- # local pid=136682 00:26:24.978 13:11:43 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:24.978 13:11:43 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:24.978 13:11:43 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:24.978 13:11:43 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:24.978 13:11:43 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:24.978 13:11:43 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:24.978 13:11:43 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136682 -w 256 00:26:24.978 13:11:43 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136682 root 20 0 20.1t 145780 28788 R 99.9 1.2 0:01.14 reactor_0' 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@48 -- # echo 136682 root 20 0 20.1t 145780 28788 R 99.9 1.2 0:01.14 reactor_0 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:25.236 13:11:43 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:25.236 13:11:43 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136682 2 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136682 2 busy 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@33 -- # local pid=136682 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:25.236 
13:11:43 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136682 -w 256 00:26:25.236 13:11:43 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:25.236 13:11:44 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136686 root 20 0 20.1t 145780 28788 R 99.9 1.2 0:00.33 reactor_2' 00:26:25.236 13:11:44 -- interrupt/interrupt_common.sh@48 -- # echo 136686 root 20 0 20.1t 145780 28788 R 99.9 1.2 0:00.33 reactor_2 00:26:25.236 13:11:44 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:25.236 13:11:44 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:25.236 13:11:44 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:25.236 13:11:44 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:25.236 13:11:44 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:25.236 13:11:44 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:25.236 13:11:44 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:25.236 13:11:44 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:25.236 13:11:44 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:25.493 [2024-06-11 13:11:44.320618] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:25.493 [2024-06-11 13:11:44.321217] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:25.751 13:11:44 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:26:25.751 13:11:44 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 136682 2 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136682 2 idle 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@33 -- # local pid=136682 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136682 -w 256 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136686 root 20 0 20.1t 145844 28788 S 0.0 1.2 0:00.60 reactor_2' 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@48 -- # echo 136686 root 20 0 20.1t 145844 28788 S 0.0 1.2 0:00.60 reactor_2 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:25.751 13:11:44 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:25.751 13:11:44 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:25.751 13:11:44 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:26.009 [2024-06-11 13:11:44.740596] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:26.009 [2024-06-11 13:11:44.741185] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:26.009 13:11:44 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:26:26.009 13:11:44 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:26:26.009 13:11:44 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:26:26.268 [2024-06-11 13:11:44.985039] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:26.268 13:11:44 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 136682 0 00:26:26.268 13:11:44 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136682 0 idle 00:26:26.268 13:11:44 -- interrupt/interrupt_common.sh@33 -- # local pid=136682 00:26:26.268 13:11:44 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:26.268 13:11:44 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:26.268 13:11:44 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:26.268 13:11:44 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:26.268 13:11:44 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:26.268 13:11:44 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:26.268 13:11:44 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:26.268 13:11:44 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136682 -w 256 00:26:26.268 13:11:45 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:26.526 13:11:45 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136682 root 20 0 20.1t 145936 28788 S 0.0 1.2 0:02.00 reactor_0' 00:26:26.526 13:11:45 -- interrupt/interrupt_common.sh@48 -- # echo 136682 root 20 0 20.1t 145936 28788 S 0.0 1.2 0:02.00 reactor_0 00:26:26.526 13:11:45 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:26.526 13:11:45 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:26.526 13:11:45 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:26.526 13:11:45 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:26.526 13:11:45 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:26.526 13:11:45 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:26.527 13:11:45 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:26.527 13:11:45 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:26.527 13:11:45 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:26.527 13:11:45 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:26:26.527 13:11:45 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:26:26.527 13:11:45 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 136682 
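The stretch above disables interrupt mode on reactors 0 and 2 (forcing them to busy-poll at ~100% CPU), then re-enables it and verifies they go idle again. Condensed, the RPC side of that cycle for one reactor looks roughly like the following; rpc.py and the interrupt_plugin command are the ones already visible in the trace, while the sleep is an assumption standing in for the test's own polling:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Put reactor 2 into poll mode (disable interrupts); it should start spinning at ~100% CPU.
    "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
    sleep 1   # give the reactor a moment to switch; the real test re-checks via top
    # ...the top-based check sketched earlier should now report reactor_2 as busy.

    # Switch it back to interrupt mode; CPU usage should drop back to ~0%.
    "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2
    sleep 1
    # ...reactor_2 should now read idle again.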
00:26:26.527 13:11:45 -- common/autotest_common.sh@926 -- # '[' -z 136682 ']' 00:26:26.527 13:11:45 -- common/autotest_common.sh@930 -- # kill -0 136682 00:26:26.527 13:11:45 -- common/autotest_common.sh@931 -- # uname 00:26:26.527 13:11:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:26.527 13:11:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136682 00:26:26.527 13:11:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:26.527 13:11:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:26.527 13:11:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136682' 00:26:26.527 killing process with pid 136682 00:26:26.527 13:11:45 -- common/autotest_common.sh@945 -- # kill 136682 00:26:26.527 13:11:45 -- common/autotest_common.sh@950 -- # wait 136682 00:26:27.902 13:11:46 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:26:27.902 13:11:46 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:27.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.902 13:11:46 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:26:27.902 13:11:46 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.902 13:11:46 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:27.902 13:11:46 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=136849 00:26:27.902 13:11:46 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:27.902 13:11:46 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:27.902 13:11:46 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 136849 /var/tmp/spdk.sock 00:26:27.902 13:11:46 -- common/autotest_common.sh@819 -- # '[' -z 136849 ']' 00:26:27.902 13:11:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.902 13:11:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:27.902 13:11:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.902 13:11:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:27.902 13:11:46 -- common/autotest_common.sh@10 -- # set +x 00:26:27.902 [2024-06-11 13:11:46.484476] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:27.902 [2024-06-11 13:11:46.485053] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136849 ] 00:26:27.902 [2024-06-11 13:11:46.672717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:28.160 [2024-06-11 13:11:46.857076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.160 [2024-06-11 13:11:46.857217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.160 [2024-06-11 13:11:46.857213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.419 [2024-06-11 13:11:47.145756] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
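killprocess, traced at the end of the previous run, only kills a pid after confirming the process is still alive and checking its command name; the real helper special-cases processes launched through sudo, which this stripped-down sketch simply skips:

    # Kill an SPDK test process only after confirming it is still ours.
    killprocess() {
        local pid=$1 name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1                  # real helper handles sudo-wrapped pids separately
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                          # reap it if it is a child of this shell
    }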
00:26:28.677 13:11:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:28.677 13:11:47 -- common/autotest_common.sh@852 -- # return 0 00:26:28.677 13:11:47 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:26:28.677 13:11:47 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:28.935 Malloc0 00:26:28.935 Malloc1 00:26:28.935 Malloc2 00:26:28.935 13:11:47 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:26:28.935 13:11:47 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:28.935 13:11:47 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:28.935 13:11:47 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:28.935 5000+0 records in 00:26:28.935 5000+0 records out 00:26:28.935 10240000 bytes (10 MB, 9.8 MiB) copied, 0.024162 s, 424 MB/s 00:26:28.935 13:11:47 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:29.193 AIO0 00:26:29.193 13:11:48 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 136849 00:26:29.193 13:11:48 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 136849 00:26:29.193 13:11:48 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=136849 00:26:29.193 13:11:48 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:26:29.193 13:11:48 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:29.193 13:11:48 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:29.193 13:11:48 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:29.193 13:11:48 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:29.193 13:11:48 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:29.193 13:11:48 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:29.193 13:11:48 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:29.193 13:11:48 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:29.452 13:11:48 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:29.452 13:11:48 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:29.452 13:11:48 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:26:29.452 13:11:48 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:29.452 13:11:48 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:29.452 13:11:48 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:29.452 13:11:48 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:29.452 13:11:48 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:29.452 13:11:48 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:29.711 spdk_thread ids are 1 on reactor0. 
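setup_bdev_aio above just carves a ~10 MB zeroed file out of /dev/zero and registers it with the running target as an AIO block device; both commands appear verbatim in the trace. Isolated, the two steps are:

    aiofile=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # 5000 blocks of 2048 bytes = 10,240,000 bytes of zeroed backing storage.
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000

    # Expose the file as an AIO bdev named AIO0 with a 2048-byte block size.
    "$rpc" bdev_aio_create "$aiofile" AIO0 2048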
00:26:29.711 13:11:48 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:29.711 13:11:48 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:29.711 13:11:48 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:29.711 13:11:48 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136849 0 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136849 0 idle 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@33 -- # local pid=136849 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136849 -w 256 00:26:29.711 13:11:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136849 root 20 0 20.1t 145628 28724 S 0.0 1.2 0:00.72 reactor_0' 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@48 -- # echo 136849 root 20 0 20.1t 145628 28724 S 0.0 1.2 0:00.72 reactor_0 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:29.969 13:11:48 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:29.969 13:11:48 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136849 1 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136849 1 idle 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@33 -- # local pid=136849 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:29.969 13:11:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136849 -w 256 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136853 root 20 0 20.1t 145628 28724 S 0.0 1.2 0:00.00 reactor_1' 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@48 -- # echo 136853 root 20 0 20.1t 145628 28724 S 0.0 1.2 0:00.00 reactor_1 00:26:29.970 13:11:48 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:29.970 13:11:48 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:29.970 13:11:48 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136849 2 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136849 2 idle 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@33 -- # local pid=136849 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136849 -w 256 00:26:29.970 13:11:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:30.228 13:11:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136854 root 20 0 20.1t 145628 28724 S 0.0 1.2 0:00.00 reactor_2' 00:26:30.228 13:11:48 -- interrupt/interrupt_common.sh@48 -- # echo 136854 root 20 0 20.1t 145628 28724 S 0.0 1.2 0:00.00 reactor_2 00:26:30.228 13:11:48 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:30.228 13:11:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:30.228 13:11:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:30.228 13:11:48 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:30.228 13:11:48 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:30.228 13:11:48 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:30.228 13:11:48 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:30.229 13:11:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:30.229 13:11:48 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:26:30.229 13:11:48 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:30.487 [2024-06-11 13:11:49.146009] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:30.487 [2024-06-11 13:11:49.146298] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
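reactor_get_thread_ids, used twice above, maps a reactor's CPU mask to the SPDK thread ids pinned to it by filtering the thread_get_stats RPC output with jq. A self-contained sketch reusing the exact jq filter from the trace; the mask is passed in the decimal form the script derives from the hex mask (1 for 0x1, 4 for 0x4):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # List the ids of SPDK threads whose cpumask matches a given reactor mask.
    reactor_thread_ids() {
        local mask=$1
        "$rpc" thread_get_stats \
            | jq --arg reactor_cpumask "$mask" \
                 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }

    thd0_ids=($(reactor_thread_ids 1))   # threads pinned to reactor 0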
00:26:30.487 [2024-06-11 13:11:49.147223] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:30.487 13:11:49 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:30.746 [2024-06-11 13:11:49.393895] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:30.746 [2024-06-11 13:11:49.394705] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:30.746 13:11:49 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:30.746 13:11:49 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136849 0 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136849 0 busy 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@33 -- # local pid=136849 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136849 -w 256 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136849 root 20 0 20.1t 145704 28724 R 99.9 1.2 0:01.14 reactor_0' 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@48 -- # echo 136849 root 20 0 20.1t 145704 28724 R 99.9 1.2 0:01.14 reactor_0 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:30.746 13:11:49 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:30.746 13:11:49 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136849 2 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136849 2 busy 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@33 -- # local pid=136849 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136849 -w 256 00:26:30.746 13:11:49 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:31.004 13:11:49 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
136854 root 20 0 20.1t 145704 28724 R 99.9 1.2 0:00.33 reactor_2' 00:26:31.004 13:11:49 -- interrupt/interrupt_common.sh@48 -- # echo 136854 root 20 0 20.1t 145704 28724 R 99.9 1.2 0:00.33 reactor_2 00:26:31.004 13:11:49 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:31.004 13:11:49 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:31.004 13:11:49 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:31.004 13:11:49 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:31.004 13:11:49 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:31.004 13:11:49 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:31.004 13:11:49 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:31.004 13:11:49 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:31.005 13:11:49 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:31.263 [2024-06-11 13:11:49.986333] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:31.263 [2024-06-11 13:11:49.986795] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:31.263 13:11:50 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:26:31.263 13:11:50 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 136849 2 00:26:31.263 13:11:50 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136849 2 idle 00:26:31.263 13:11:50 -- interrupt/interrupt_common.sh@33 -- # local pid=136849 00:26:31.263 13:11:50 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:31.263 13:11:50 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:31.263 13:11:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:31.263 13:11:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:31.263 13:11:50 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:31.263 13:11:50 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:31.263 13:11:50 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:31.263 13:11:50 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136849 -w 256 00:26:31.263 13:11:50 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:31.522 13:11:50 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136854 root 20 0 20.1t 145768 28724 S 0.0 1.2 0:00.59 reactor_2' 00:26:31.522 13:11:50 -- interrupt/interrupt_common.sh@48 -- # echo 136854 root 20 0 20.1t 145768 28724 S 0.0 1.2 0:00.59 reactor_2 00:26:31.522 13:11:50 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:31.522 13:11:50 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:31.522 13:11:50 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:31.522 13:11:50 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:31.522 13:11:50 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:31.522 13:11:50 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:31.522 13:11:50 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:31.522 13:11:50 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:31.522 13:11:50 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:31.780 [2024-06-11 13:11:50.426312] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:26:31.780 [2024-06-11 13:11:50.427274] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:26:31.780 [2024-06-11 13:11:50.427443] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:31.780 13:11:50 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:26:31.780 13:11:50 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 136849 0 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136849 0 idle 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@33 -- # local pid=136849 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136849 -w 256 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136849 root 20 0 20.1t 145808 28724 S 0.0 1.2 0:02.01 reactor_0' 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@48 -- # echo 136849 root 20 0 20.1t 145808 28724 S 0.0 1.2 0:02.01 reactor_0 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:31.780 13:11:50 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:31.780 13:11:50 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:31.780 13:11:50 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:26:31.780 13:11:50 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:31.780 13:11:50 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 136849 00:26:31.780 13:11:50 -- common/autotest_common.sh@926 -- # '[' -z 136849 ']' 00:26:31.780 13:11:50 -- common/autotest_common.sh@930 -- # kill -0 136849 00:26:31.780 13:11:50 -- common/autotest_common.sh@931 -- # uname 00:26:31.780 13:11:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:31.780 13:11:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136849 00:26:32.039 killing process with pid 136849 00:26:32.039 13:11:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:32.039 13:11:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:32.039 13:11:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136849' 00:26:32.039 13:11:50 -- common/autotest_common.sh@945 -- # kill 136849 00:26:32.039 13:11:50 -- common/autotest_common.sh@950 -- # wait 136849 00:26:33.420 13:11:51 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:26:33.420 13:11:51 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:33.420 ************************************ 00:26:33.420 END TEST reactor_set_interrupt 00:26:33.420 ************************************ 00:26:33.420 00:26:33.420 real 0m11.722s 00:26:33.420 user 0m12.224s 00:26:33.420 sys 0m1.538s 00:26:33.420 13:11:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:33.420 13:11:51 -- common/autotest_common.sh@10 -- # set +x 00:26:33.420 13:11:51 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:33.420 13:11:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:33.420 13:11:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:33.420 13:11:51 -- common/autotest_common.sh@10 -- # set +x 00:26:33.420 ************************************ 00:26:33.420 START TEST reap_unregistered_poller 00:26:33.420 ************************************ 00:26:33.420 13:11:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:33.420 * Looking for test storage... 00:26:33.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.420 13:11:52 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:33.420 13:11:52 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:33.420 13:11:52 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.420 13:11:52 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.420 13:11:52 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
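Each interrupt test script locates itself and the repository root the same way before doing anything else, which is what the dirname/readlink calls above are doing as reap_unregistered_poller.sh starts. The idiom, written out generically:

    # Resolve the directory containing this script and the repository root two levels up.
    testdir=$(readlink -f "$(dirname "$0")")
    rootdir=$(readlink -f "$testdir/../..")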
00:26:33.420 13:11:52 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:33.420 13:11:52 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:33.420 13:11:52 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:33.420 13:11:52 -- common/autotest_common.sh@34 -- # set -e 00:26:33.420 13:11:52 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:33.420 13:11:52 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:33.420 13:11:52 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:33.420 13:11:52 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:33.420 13:11:52 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:33.420 13:11:52 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:26:33.420 13:11:52 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:26:33.420 13:11:52 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:26:33.420 13:11:52 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:26:33.420 13:11:52 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:26:33.420 13:11:52 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:26:33.420 13:11:52 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:26:33.420 13:11:52 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:26:33.420 13:11:52 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:26:33.420 13:11:52 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:26:33.420 13:11:52 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:26:33.420 13:11:52 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:26:33.420 13:11:52 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:26:33.420 13:11:52 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:26:33.420 13:11:52 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:26:33.420 13:11:52 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:26:33.420 13:11:52 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:26:33.420 13:11:52 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:26:33.420 13:11:52 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:26:33.420 13:11:52 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:26:33.420 13:11:52 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:33.420 13:11:52 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:26:33.420 13:11:52 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:26:33.420 13:11:52 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:26:33.420 13:11:52 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:26:33.420 13:11:52 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:26:33.420 13:11:52 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:33.420 13:11:52 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:26:33.420 13:11:52 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:26:33.420 13:11:52 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:26:33.420 13:11:52 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:26:33.420 13:11:52 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:26:33.420 13:11:52 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:26:33.420 13:11:52 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:26:33.420 13:11:52 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:26:33.420 13:11:52 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:33.420 13:11:52 -- 
common/build_config.sh@38 -- # CONFIG_ASAN=y 00:26:33.420 13:11:52 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:26:33.420 13:11:52 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:26:33.420 13:11:52 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:26:33.420 13:11:52 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:26:33.421 13:11:52 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:26:33.421 13:11:52 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:26:33.421 13:11:52 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:33.421 13:11:52 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:26:33.421 13:11:52 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:26:33.421 13:11:52 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:26:33.421 13:11:52 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:26:33.421 13:11:52 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:26:33.421 13:11:52 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:33.421 13:11:52 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:26:33.421 13:11:52 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:26:33.421 13:11:52 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:26:33.421 13:11:52 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:33.421 13:11:52 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:26:33.421 13:11:52 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:26:33.421 13:11:52 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:33.421 13:11:52 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:33.421 13:11:52 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:33.421 13:11:52 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:26:33.421 13:11:52 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:26:33.421 13:11:52 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:26:33.421 13:11:52 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:26:33.421 13:11:52 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:26:33.421 13:11:52 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:26:33.421 13:11:52 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:26:33.421 13:11:52 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:33.421 13:11:52 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:26:33.421 13:11:52 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:26:33.421 13:11:52 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:26:33.421 13:11:52 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:26:33.421 13:11:52 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:26:33.421 13:11:52 -- common/build_config.sh@74 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:33.421 13:11:52 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:26:33.421 13:11:52 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:26:33.421 13:11:52 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:26:33.421 13:11:52 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:26:33.421 13:11:52 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:33.421 13:11:52 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:33.421 13:11:52 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:33.421 13:11:52 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:33.421 
13:11:52 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:33.421 13:11:52 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:33.421 13:11:52 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:33.421 13:11:52 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:33.421 13:11:52 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:33.421 13:11:52 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:33.421 13:11:52 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:33.421 13:11:52 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:33.421 13:11:52 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:33.421 13:11:52 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:33.421 13:11:52 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:33.421 13:11:52 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:33.421 13:11:52 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:33.421 #define SPDK_CONFIG_H 00:26:33.421 #define SPDK_CONFIG_APPS 1 00:26:33.421 #define SPDK_CONFIG_ARCH native 00:26:33.421 #define SPDK_CONFIG_ASAN 1 00:26:33.421 #undef SPDK_CONFIG_AVAHI 00:26:33.421 #undef SPDK_CONFIG_CET 00:26:33.421 #define SPDK_CONFIG_COVERAGE 1 00:26:33.421 #define SPDK_CONFIG_CROSS_PREFIX 00:26:33.421 #undef SPDK_CONFIG_CRYPTO 00:26:33.421 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:33.421 #undef SPDK_CONFIG_CUSTOMOCF 00:26:33.421 #undef SPDK_CONFIG_DAOS 00:26:33.421 #define SPDK_CONFIG_DAOS_DIR 00:26:33.421 #define SPDK_CONFIG_DEBUG 1 00:26:33.421 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:33.421 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:33.421 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:33.421 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:33.421 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:33.421 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:33.421 #define SPDK_CONFIG_EXAMPLES 1 00:26:33.421 #undef SPDK_CONFIG_FC 00:26:33.421 #define SPDK_CONFIG_FC_PATH 00:26:33.421 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:33.421 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:33.421 #undef SPDK_CONFIG_FUSE 00:26:33.421 #undef SPDK_CONFIG_FUZZER 00:26:33.421 #define SPDK_CONFIG_FUZZER_LIB 00:26:33.421 #undef SPDK_CONFIG_GOLANG 00:26:33.421 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:33.421 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:33.421 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:33.421 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:33.421 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:33.421 #define SPDK_CONFIG_IDXD 1 00:26:33.421 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:33.421 #undef SPDK_CONFIG_IPSEC_MB 00:26:33.421 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:33.421 #define SPDK_CONFIG_ISAL 1 00:26:33.421 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:33.421 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:33.421 #define SPDK_CONFIG_LIBDIR 00:26:33.421 #undef SPDK_CONFIG_LTO 00:26:33.421 #define SPDK_CONFIG_MAX_LCORES 00:26:33.421 #define SPDK_CONFIG_NVME_CUSE 1 00:26:33.421 #undef SPDK_CONFIG_OCF 00:26:33.421 #define SPDK_CONFIG_OCF_PATH 00:26:33.421 #define SPDK_CONFIG_OPENSSL_PATH 00:26:33.421 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:33.421 #undef SPDK_CONFIG_PGO_USE 00:26:33.421 #define SPDK_CONFIG_PREFIX /usr/local 
00:26:33.421 #define SPDK_CONFIG_RAID5F 1 00:26:33.421 #undef SPDK_CONFIG_RBD 00:26:33.421 #define SPDK_CONFIG_RDMA 1 00:26:33.421 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:33.421 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:33.421 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:33.421 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:33.421 #undef SPDK_CONFIG_SHARED 00:26:33.421 #undef SPDK_CONFIG_SMA 00:26:33.421 #define SPDK_CONFIG_TESTS 1 00:26:33.421 #undef SPDK_CONFIG_TSAN 00:26:33.421 #undef SPDK_CONFIG_UBLK 00:26:33.421 #define SPDK_CONFIG_UBSAN 1 00:26:33.421 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:33.421 #undef SPDK_CONFIG_URING 00:26:33.421 #define SPDK_CONFIG_URING_PATH 00:26:33.421 #undef SPDK_CONFIG_URING_ZNS 00:26:33.421 #undef SPDK_CONFIG_USDT 00:26:33.421 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:33.421 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:33.421 #undef SPDK_CONFIG_VFIO_USER 00:26:33.421 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:33.421 #define SPDK_CONFIG_VHOST 1 00:26:33.421 #define SPDK_CONFIG_VIRTIO 1 00:26:33.421 #undef SPDK_CONFIG_VTUNE 00:26:33.421 #define SPDK_CONFIG_VTUNE_DIR 00:26:33.421 #define SPDK_CONFIG_WERROR 1 00:26:33.421 #define SPDK_CONFIG_WPDK_DIR 00:26:33.421 #undef SPDK_CONFIG_XNVME 00:26:33.421 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:33.421 13:11:52 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:33.421 13:11:52 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:33.421 13:11:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.421 13:11:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.421 13:11:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.421 13:11:52 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:33.421 13:11:52 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:33.421 13:11:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:33.421 13:11:52 -- paths/export.sh@5 -- # export PATH 00:26:33.421 13:11:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:33.421 13:11:52 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:33.421 13:11:52 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:33.421 13:11:52 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:33.421 13:11:52 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:33.421 13:11:52 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:33.421 13:11:52 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:33.422 13:11:52 -- pm/common@16 -- # TEST_TAG=N/A 00:26:33.422 13:11:52 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:33.422 13:11:52 -- common/autotest_common.sh@52 -- # : 1 00:26:33.422 13:11:52 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:33.422 13:11:52 -- common/autotest_common.sh@56 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:33.422 13:11:52 -- common/autotest_common.sh@58 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:33.422 13:11:52 -- common/autotest_common.sh@60 -- # : 1 00:26:33.422 13:11:52 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:33.422 13:11:52 -- common/autotest_common.sh@62 -- # : 1 00:26:33.422 13:11:52 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:33.422 13:11:52 -- common/autotest_common.sh@64 -- # : 00:26:33.422 13:11:52 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:33.422 13:11:52 -- common/autotest_common.sh@66 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:33.422 13:11:52 -- common/autotest_common.sh@68 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:33.422 13:11:52 -- common/autotest_common.sh@70 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:33.422 13:11:52 -- common/autotest_common.sh@72 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:33.422 13:11:52 -- common/autotest_common.sh@74 -- # : 1 00:26:33.422 13:11:52 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:33.422 13:11:52 -- common/autotest_common.sh@76 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:33.422 13:11:52 -- common/autotest_common.sh@78 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:33.422 13:11:52 -- common/autotest_common.sh@80 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:33.422 13:11:52 -- common/autotest_common.sh@82 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:33.422 13:11:52 -- common/autotest_common.sh@84 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:33.422 13:11:52 -- 
common/autotest_common.sh@86 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:33.422 13:11:52 -- common/autotest_common.sh@88 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:33.422 13:11:52 -- common/autotest_common.sh@90 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:33.422 13:11:52 -- common/autotest_common.sh@92 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:33.422 13:11:52 -- common/autotest_common.sh@94 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:33.422 13:11:52 -- common/autotest_common.sh@96 -- # : rdma 00:26:33.422 13:11:52 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:33.422 13:11:52 -- common/autotest_common.sh@98 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:33.422 13:11:52 -- common/autotest_common.sh@100 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:33.422 13:11:52 -- common/autotest_common.sh@102 -- # : 1 00:26:33.422 13:11:52 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:33.422 13:11:52 -- common/autotest_common.sh@104 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:33.422 13:11:52 -- common/autotest_common.sh@106 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:33.422 13:11:52 -- common/autotest_common.sh@108 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:33.422 13:11:52 -- common/autotest_common.sh@110 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:33.422 13:11:52 -- common/autotest_common.sh@112 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:33.422 13:11:52 -- common/autotest_common.sh@114 -- # : 1 00:26:33.422 13:11:52 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:33.422 13:11:52 -- common/autotest_common.sh@116 -- # : 1 00:26:33.422 13:11:52 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:33.422 13:11:52 -- common/autotest_common.sh@118 -- # : 00:26:33.422 13:11:52 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:33.422 13:11:52 -- common/autotest_common.sh@120 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:33.422 13:11:52 -- common/autotest_common.sh@122 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:33.422 13:11:52 -- common/autotest_common.sh@124 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:33.422 13:11:52 -- common/autotest_common.sh@126 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:33.422 13:11:52 -- common/autotest_common.sh@128 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:33.422 13:11:52 -- common/autotest_common.sh@130 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:33.422 13:11:52 -- common/autotest_common.sh@132 -- # : 00:26:33.422 13:11:52 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:33.422 
13:11:52 -- common/autotest_common.sh@134 -- # : true 00:26:33.422 13:11:52 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:33.422 13:11:52 -- common/autotest_common.sh@136 -- # : 1 00:26:33.422 13:11:52 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:33.422 13:11:52 -- common/autotest_common.sh@138 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:33.422 13:11:52 -- common/autotest_common.sh@140 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:33.422 13:11:52 -- common/autotest_common.sh@142 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:33.422 13:11:52 -- common/autotest_common.sh@144 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:33.422 13:11:52 -- common/autotest_common.sh@146 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:33.422 13:11:52 -- common/autotest_common.sh@148 -- # : 00:26:33.422 13:11:52 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:33.422 13:11:52 -- common/autotest_common.sh@150 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:33.422 13:11:52 -- common/autotest_common.sh@152 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:33.422 13:11:52 -- common/autotest_common.sh@154 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:33.422 13:11:52 -- common/autotest_common.sh@156 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:33.422 13:11:52 -- common/autotest_common.sh@158 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:33.422 13:11:52 -- common/autotest_common.sh@160 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:33.422 13:11:52 -- common/autotest_common.sh@163 -- # : 00:26:33.422 13:11:52 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:33.422 13:11:52 -- common/autotest_common.sh@165 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:33.422 13:11:52 -- common/autotest_common.sh@167 -- # : 0 00:26:33.422 13:11:52 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:33.422 13:11:52 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:33.422 13:11:52 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:33.422 13:11:52 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:33.422 13:11:52 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:33.422 13:11:52 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:33.422 13:11:52 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:33.422 13:11:52 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:33.422 13:11:52 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:33.422 13:11:52 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:33.423 13:11:52 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:33.423 13:11:52 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:33.423 13:11:52 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:33.423 13:11:52 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:33.423 13:11:52 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:33.423 13:11:52 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:33.423 13:11:52 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:33.423 13:11:52 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:33.423 13:11:52 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:33.423 13:11:52 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:33.423 13:11:52 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:33.423 13:11:52 -- common/autotest_common.sh@196 -- # cat 00:26:33.423 13:11:52 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:33.423 13:11:52 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:33.423 13:11:52 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:33.423 13:11:52 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:33.423 13:11:52 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:33.423 13:11:52 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:33.423 13:11:52 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:33.423 13:11:52 -- common/autotest_common.sh@235 -- # export 
SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:33.423 13:11:52 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:33.423 13:11:52 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:33.423 13:11:52 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:33.423 13:11:52 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:33.423 13:11:52 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:33.423 13:11:52 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:33.423 13:11:52 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:33.423 13:11:52 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:33.423 13:11:52 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:33.423 13:11:52 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:33.423 13:11:52 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:33.423 13:11:52 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:26:33.423 13:11:52 -- common/autotest_common.sh@249 -- # export valgrind= 00:26:33.423 13:11:52 -- common/autotest_common.sh@249 -- # valgrind= 00:26:33.423 13:11:52 -- common/autotest_common.sh@255 -- # uname -s 00:26:33.423 13:11:52 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:26:33.423 13:11:52 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:26:33.423 13:11:52 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:26:33.423 13:11:52 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:26:33.423 13:11:52 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:33.423 13:11:52 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:33.423 13:11:52 -- common/autotest_common.sh@265 -- # MAKE=make 00:26:33.423 13:11:52 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:26:33.423 13:11:52 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:26:33.423 13:11:52 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:26:33.423 13:11:52 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:33.423 13:11:52 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:26:33.423 13:11:52 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:26:33.423 13:11:52 -- common/autotest_common.sh@309 -- # [[ -z 137018 ]] 00:26:33.423 13:11:52 -- common/autotest_common.sh@309 -- # kill -0 137018 00:26:33.423 13:11:52 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:26:33.423 13:11:52 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:26:33.423 13:11:52 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:26:33.423 13:11:52 -- common/autotest_common.sh@322 -- # local mount target_dir 00:26:33.423 13:11:52 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:26:33.423 13:11:52 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:26:33.423 13:11:52 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:26:33.423 13:11:52 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:26:33.423 13:11:52 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.BLKV3G 00:26:33.423 13:11:52 -- 
common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:33.423 13:11:52 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:26:33.423 13:11:52 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:26:33.423 13:11:52 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.BLKV3G/tests/interrupt /tmp/spdk.BLKV3G 00:26:33.423 13:11:52 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:26:33.423 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.423 13:11:52 -- common/autotest_common.sh@318 -- # df -T 00:26:33.423 13:11:52 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224461824 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224461824 00:26:33.423 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:33.423 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:26:33.423 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360 00:26:33.423 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=10612281344 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:26:33.423 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=9987735552 00:26:33.423 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=6269968384 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272561152 00:26:33.423 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:26:33.423 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:26:33.423 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:33.423 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:33.423 13:11:52 
-- common/autotest_common.sh@353 -- # avails["$mount"]=6272561152 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272561152 00:26:33.423 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:33.423 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:33.423 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:33.423 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:26:33.423 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:26:33.423 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:26:33.423 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:33.423 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:26:33.424 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:26:33.424 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.424 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:26:33.424 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:33.424 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:33.424 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:26:33.424 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:26:33.424 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.424 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:33.424 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:33.424 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:26:33.424 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:26:33.424 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:33.424 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.424 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output 00:26:33.424 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:26:33.424 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=97170231296 00:26:33.424 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:26:33.424 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=2532548608 00:26:33.424 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 
00:26:33.424 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:26:33.424 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:33.424 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:33.424 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:26:33.424 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:26:33.424 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.424 13:11:52 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:26:33.424 13:11:52 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:33.424 13:11:52 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:33.424 13:11:52 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:33.424 13:11:52 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:33.424 13:11:52 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.424 13:11:52 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:26:33.424 * Looking for test storage... 00:26:33.424 13:11:52 -- common/autotest_common.sh@359 -- # local target_space new_size 00:26:33.424 13:11:52 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:26:33.424 13:11:52 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.424 13:11:52 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:33.424 13:11:52 -- common/autotest_common.sh@363 -- # mount=/ 00:26:33.424 13:11:52 -- common/autotest_common.sh@365 -- # target_space=10612281344 00:26:33.424 13:11:52 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:26:33.424 13:11:52 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:26:33.424 13:11:52 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:26:33.424 13:11:52 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:26:33.424 13:11:52 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:26:33.424 13:11:52 -- common/autotest_common.sh@372 -- # new_size=12202328064 00:26:33.424 13:11:52 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:33.424 13:11:52 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.424 13:11:52 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.424 13:11:52 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.424 13:11:52 -- common/autotest_common.sh@380 -- # return 0 00:26:33.424 13:11:52 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:26:33.424 13:11:52 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:26:33.424 13:11:52 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:33.424 13:11:52 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:33.424 13:11:52 -- common/autotest_common.sh@1672 -- # true 00:26:33.424 13:11:52 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:26:33.424 13:11:52 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:33.424 13:11:52 -- common/autotest_common.sh@25 -- # 
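The set_test_storage pass traced above boils down to a small df survey: every mount reported by df -T is read into the mounts/fss/sizes/avails/uses arrays, the filesystem backing the test directory is looked up (here / on /dev/vda1), and the run either stays in place or falls back to the /tmp/spdk.XXXXXX scratch directory when the requested ~2.1 GB would push usage past 95%. A condensed sketch of that logic follows; paths and numbers are taken from the trace, and scaling the 1K-block columns from df to bytes is an assumption, since the log only shows the already-evaluated values.

  declare -A mounts fss sizes avails uses
  requested_size=$((2147483648 + 64 * 1024 * 1024))        # 2214592512, as logged
  while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source; fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))                        # assumed 1K-block -> byte scaling
    avails["$mount"]=$((avail * 1024)); uses["$mount"]=$((use * 1024))
  done < <(df -T | grep -v Filesystem)
  target_dir=/home/vagrant/spdk_repo/spdk/test/interrupt
  mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # "/" here
  target_space=${avails[$mount_point]}                      # 10612281344 bytes on /dev/vda1
  new_size=$((uses[$mount_point] + requested_size))         # 12202328064
  if (( target_space >= requested_size && new_size * 100 / sizes[$mount_point] <= 95 )); then
    export SPDK_TEST_STORAGE=$target_dir                    # enough headroom: run in place
  fi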
[[ -e /proc/self/fd/13 ]] 00:26:33.424 13:11:52 -- common/autotest_common.sh@27 -- # exec 00:26:33.424 13:11:52 -- common/autotest_common.sh@29 -- # exec 00:26:33.424 13:11:52 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:33.424 13:11:52 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:26:33.424 13:11:52 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:33.424 13:11:52 -- common/autotest_common.sh@18 -- # set -x 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:33.424 13:11:52 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:33.424 13:11:52 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:33.424 13:11:52 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=137067 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:33.424 13:11:52 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 137067 /var/tmp/spdk.sock 00:26:33.424 13:11:52 -- common/autotest_common.sh@819 -- # '[' -z 137067 ']' 00:26:33.424 13:11:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.424 13:11:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:33.424 13:11:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.424 13:11:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:33.424 13:11:52 -- common/autotest_common.sh@10 -- # set +x 00:26:33.424 [2024-06-11 13:11:52.186838] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
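Everything from here to the END banner is reap_unregistered_poller.sh doing its work: the interrupt target comes up on cores 0-2 (mask 0x07), its app_thread initially carries only the timed rpc_subsystem_poll poller, an AIO bdev is created on a freshly written 10 MB file, and after bdev_wait_for_examine plus a short sleep the test asserts that rpc_subsystem_poll is once again the only poller left, i.e. the examine poller was properly reaped in interrupt mode. Condensed into plain shell, this is roughly the sequence (a sketch assembled from the calls in the trace; rpc.py talks to the same /var/tmp/spdk.sock the target listens on):

  build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g &
  intr_tgt_pid=$!
  rpc=scripts/rpc.py
  sleep 1    # stand-in: the real test uses waitforlisten to poll until the socket answers
  $rpc thread_get_pollers | jq -r '.threads[0].timed_pollers[].name'    # rpc_subsystem_poll
  dd if=/dev/zero of=test/interrupt/aiofile bs=2048 count=5000          # 10 MB backing file
  $rpc bdev_aio_create test/interrupt/aiofile AIO0 2048
  $rpc bdev_wait_for_examine
  sleep 0.1
  remaining=$($rpc thread_get_pollers | jq -r '.threads[0].timed_pollers[].name')
  [[ $remaining == rpc_subsystem_poll ]]    # the unregistered poller must be gone
  kill "$intr_tgt_pid"; wait "$intr_tgt_pid"
  rm -f test/interrupt/aiofile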
00:26:33.424 [2024-06-11 13:11:52.187223] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137067 ] 00:26:33.683 [2024-06-11 13:11:52.361983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:33.942 [2024-06-11 13:11:52.548990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.942 [2024-06-11 13:11:52.549133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.942 [2024-06-11 13:11:52.549131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.199 [2024-06-11 13:11:52.823252] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:34.457 13:11:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:34.457 13:11:53 -- common/autotest_common.sh@852 -- # return 0 00:26:34.457 13:11:53 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:26:34.457 13:11:53 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:26:34.457 13:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.457 13:11:53 -- common/autotest_common.sh@10 -- # set +x 00:26:34.457 13:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.457 13:11:53 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:26:34.457 "name": "app_thread", 00:26:34.457 "id": 1, 00:26:34.457 "active_pollers": [], 00:26:34.457 "timed_pollers": [ 00:26:34.457 { 00:26:34.457 "name": "rpc_subsystem_poll", 00:26:34.457 "id": 1, 00:26:34.457 "state": "waiting", 00:26:34.457 "run_count": 0, 00:26:34.457 "busy_count": 0, 00:26:34.457 "period_ticks": 8800000 00:26:34.457 } 00:26:34.457 ], 00:26:34.457 "paused_pollers": [] 00:26:34.457 }' 00:26:34.457 13:11:53 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:26:34.457 13:11:53 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:26:34.457 13:11:53 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:26:34.457 13:11:53 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:26:34.715 13:11:53 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:26:34.715 13:11:53 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:26:34.715 13:11:53 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:34.715 13:11:53 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:34.715 13:11:53 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:34.715 5000+0 records in 00:26:34.715 5000+0 records out 00:26:34.715 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0203229 s, 504 MB/s 00:26:34.715 13:11:53 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:34.973 AIO0 00:26:34.973 13:11:53 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:35.231 13:11:53 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:26:35.231 13:11:53 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:26:35.231 13:11:53 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:26:35.231 13:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:35.231 13:11:53 -- common/autotest_common.sh@10 -- # set +x 00:26:35.231 13:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:35.231 13:11:54 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:26:35.231 "name": "app_thread", 00:26:35.231 "id": 1, 00:26:35.231 "active_pollers": [], 00:26:35.231 "timed_pollers": [ 00:26:35.231 { 00:26:35.231 "name": "rpc_subsystem_poll", 00:26:35.231 "id": 1, 00:26:35.231 "state": "waiting", 00:26:35.231 "run_count": 0, 00:26:35.231 "busy_count": 0, 00:26:35.231 "period_ticks": 8800000 00:26:35.231 } 00:26:35.231 ], 00:26:35.231 "paused_pollers": [] 00:26:35.231 }' 00:26:35.231 13:11:54 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:26:35.489 13:11:54 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:26:35.489 13:11:54 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:26:35.489 13:11:54 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:26:35.489 13:11:54 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:26:35.489 13:11:54 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:26:35.489 13:11:54 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:26:35.489 13:11:54 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 137067 00:26:35.489 13:11:54 -- common/autotest_common.sh@926 -- # '[' -z 137067 ']' 00:26:35.489 13:11:54 -- common/autotest_common.sh@930 -- # kill -0 137067 00:26:35.489 13:11:54 -- common/autotest_common.sh@931 -- # uname 00:26:35.489 13:11:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:35.489 13:11:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137067 00:26:35.489 killing process with pid 137067 00:26:35.489 13:11:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:35.489 13:11:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:35.490 13:11:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137067' 00:26:35.490 13:11:54 -- common/autotest_common.sh@945 -- # kill 137067 00:26:35.490 13:11:54 -- common/autotest_common.sh@950 -- # wait 137067 00:26:36.424 13:11:55 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:26:36.424 13:11:55 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:36.424 ************************************ 00:26:36.424 END TEST reap_unregistered_poller 00:26:36.424 ************************************ 00:26:36.424 00:26:36.424 real 0m3.269s 00:26:36.424 user 0m2.713s 00:26:36.424 sys 0m0.499s 00:26:36.424 13:11:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:36.424 13:11:55 -- common/autotest_common.sh@10 -- # set +x 00:26:36.424 13:11:55 -- spdk/autotest.sh@204 -- # uname -s 00:26:36.424 13:11:55 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:26:36.424 13:11:55 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:26:36.424 13:11:55 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:26:36.425 13:11:55 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:36.425 13:11:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:36.425 13:11:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:36.425 13:11:55 -- 
common/autotest_common.sh@10 -- # set +x 00:26:36.425 ************************************ 00:26:36.425 START TEST spdk_dd 00:26:36.425 ************************************ 00:26:36.425 13:11:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:36.682 * Looking for test storage... 00:26:36.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:36.683 13:11:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:36.683 13:11:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.683 13:11:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.683 13:11:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.683 13:11:55 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.683 13:11:55 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.683 13:11:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.683 13:11:55 -- paths/export.sh@5 -- # export PATH 00:26:36.683 13:11:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.683 13:11:55 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:36.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:36.943 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:37.881 13:11:56 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:26:37.881 13:11:56 -- dd/dd.sh@11 -- # nvme_in_userspace 00:26:37.881 13:11:56 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:37.881 13:11:56 -- scripts/common.sh@312 -- # local nvmes 00:26:37.881 13:11:56 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:37.881 13:11:56 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:37.881 13:11:56 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:37.881 13:11:56 -- scripts/common.sh@297 -- # local bdf= 00:26:37.881 13:11:56 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:37.881 13:11:56 -- scripts/common.sh@232 -- # local class 00:26:37.881 
13:11:56 -- scripts/common.sh@233 -- # local subclass 00:26:37.881 13:11:56 -- scripts/common.sh@234 -- # local progif 00:26:37.881 13:11:56 -- scripts/common.sh@235 -- # printf %02x 1 00:26:37.881 13:11:56 -- scripts/common.sh@235 -- # class=01 00:26:37.881 13:11:56 -- scripts/common.sh@236 -- # printf %02x 8 00:26:37.881 13:11:56 -- scripts/common.sh@236 -- # subclass=08 00:26:37.881 13:11:56 -- scripts/common.sh@237 -- # printf %02x 2 00:26:37.881 13:11:56 -- scripts/common.sh@237 -- # progif=02 00:26:37.881 13:11:56 -- scripts/common.sh@239 -- # hash lspci 00:26:37.881 13:11:56 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:37.881 13:11:56 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:37.881 13:11:56 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:37.881 13:11:56 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:37.881 13:11:56 -- scripts/common.sh@244 -- # tr -d '"' 00:26:37.881 13:11:56 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:37.881 13:11:56 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:37.881 13:11:56 -- scripts/common.sh@15 -- # local i 00:26:37.881 13:11:56 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:37.881 13:11:56 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:37.881 13:11:56 -- scripts/common.sh@24 -- # return 0 00:26:37.881 13:11:56 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:37.881 13:11:56 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:37.881 13:11:56 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:37.881 13:11:56 -- scripts/common.sh@322 -- # uname -s 00:26:37.881 13:11:56 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:37.881 13:11:56 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:37.881 13:11:56 -- scripts/common.sh@327 -- # (( 1 )) 00:26:37.881 13:11:56 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:26:37.881 13:11:56 -- dd/dd.sh@13 -- # check_liburing 00:26:37.881 13:11:56 -- dd/common.sh@139 -- # local lib so 00:26:37.881 13:11:56 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:26:37.881 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.881 13:11:56 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:26:37.881 13:11:56 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:26:37.882 13:11:56 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.882 13:11:56 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:26:37.882 13:11:56 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:37.882 13:11:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:37.882 13:11:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:37.882 13:11:56 -- common/autotest_common.sh@10 -- # set +x 00:26:37.882 ************************************ 00:26:37.882 START TEST spdk_dd_basic_rw 00:26:37.882 ************************************ 00:26:37.882 13:11:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:37.882 * Looking for test storage... 
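Before basic_rw starts, dd.sh runs two quick probes that are easy to miss in the noise: nvme_in_userspace filters lspci -mm -n -D down to class 01 / subclass 08 / progif 02 devices (NVMe controllers), which yields only 0000:00:06.0 on this VM, and check_liburing lists the shared objects spdk_dd was linked against via LD_TRACE_LOADED_OBJECTS and looks for liburing.so, leaving liburing_in_use=0 for this build (the follow-up (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) guard is a no-op here because SPDK_TEST_URING=0). The two probes amount to roughly the following; this is a sketch of the idea, not the exact helper bodies in scripts/common.sh and dd/common.sh.

  # NVMe controllers visible to userspace: PCI class 01, subclass 08, progif 02
  # (cc carries literal quotes because lspci -mm prints its fields quoted)
  lspci -mm -n -D | grep -i -- -p02 \
    | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # does the spdk_dd binary pull in liburing?
  liburing_in_use=0
  while read -r lib _ so _; do
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
  done < <(LD_TRACE_LOADED_OBJECTS=1 build/bin/spdk_dd)
  echo "liburing_in_use=$liburing_in_use"    # 0 in this run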
00:26:38.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:38.141 13:11:56 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:38.141 13:11:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.141 13:11:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.141 13:11:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.141 13:11:56 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:38.141 13:11:56 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:38.141 13:11:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:38.141 13:11:56 -- paths/export.sh@5 -- # export PATH 00:26:38.141 13:11:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:38.141 13:11:56 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:26:38.141 13:11:56 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:26:38.141 13:11:56 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:26:38.141 13:11:56 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:26:38.141 13:11:56 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:26:38.141 13:11:56 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:26:38.141 13:11:56 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:26:38.141 13:11:56 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:38.141 13:11:56 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:38.141 13:11:56 -- dd/basic_rw.sh@93 
-- # get_native_nvme_bs 0000:00:06.0 00:26:38.141 13:11:56 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:26:38.141 13:11:56 -- dd/common.sh@126 -- # mapfile -t id 00:26:38.141 13:11:56 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:26:38.402 13:11:57 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects 
Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 101 Data Units Written: 7 Host Read Commands: 2173 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 
Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:26:38.402 13:11:57 -- dd/common.sh@130 -- # lbaf=04 00:26:38.403 13:11:57 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not 
Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change 
Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 101 Data Units Written: 7 Host Read Commands: 2173 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:26:38.403 13:11:57 -- dd/common.sh@132 -- # lbaf=4096 00:26:38.403 13:11:57 -- dd/common.sh@134 -- # echo 4096 00:26:38.403 13:11:57 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:26:38.403 13:11:57 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:38.403 13:11:57 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:26:38.403 13:11:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:38.403 13:11:57 -- common/autotest_common.sh@10 -- # set +x 00:26:38.403 13:11:57 -- dd/basic_rw.sh@96 -- # gen_conf 00:26:38.403 13:11:57 -- dd/basic_rw.sh@96 -- # : 00:26:38.403 13:11:57 -- dd/common.sh@31 -- # xtrace_disable 00:26:38.403 13:11:57 -- common/autotest_common.sh@10 -- # set +x 00:26:38.403 ************************************ 00:26:38.403 START TEST dd_bs_lt_native_bs 
00:26:38.403 ************************************ 00:26:38.403 13:11:57 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:38.403 13:11:57 -- common/autotest_common.sh@640 -- # local es=0 00:26:38.403 13:11:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:38.403 13:11:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.403 13:11:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.403 13:11:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.403 13:11:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.403 13:11:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.403 13:11:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.403 13:11:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.403 13:11:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:38.403 13:11:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:38.403 { 00:26:38.403 "subsystems": [ 00:26:38.403 { 00:26:38.403 "subsystem": "bdev", 00:26:38.403 "config": [ 00:26:38.403 { 00:26:38.403 "params": { 00:26:38.403 "trtype": "pcie", 00:26:38.403 "traddr": "0000:00:06.0", 00:26:38.403 "name": "Nvme0" 00:26:38.403 }, 00:26:38.403 "method": "bdev_nvme_attach_controller" 00:26:38.403 }, 00:26:38.403 { 00:26:38.403 "method": "bdev_wait_for_examine" 00:26:38.403 } 00:26:38.403 ] 00:26:38.403 } 00:26:38.403 ] 00:26:38.403 } 00:26:38.403 [2024-06-11 13:11:57.095866] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:38.403 [2024-06-11 13:11:57.096191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137384 ] 00:26:38.662 [2024-06-11 13:11:57.262170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.662 [2024-06-11 13:11:57.418060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.921 [2024-06-11 13:11:57.750062] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:26:38.921 [2024-06-11 13:11:57.750327] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:39.880 [2024-06-11 13:11:58.370459] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:39.880 ************************************ 00:26:39.880 END TEST dd_bs_lt_native_bs 00:26:39.880 ************************************ 00:26:39.880 13:11:58 -- common/autotest_common.sh@643 -- # es=234 00:26:39.880 13:11:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:39.880 13:11:58 -- common/autotest_common.sh@652 -- # es=106 00:26:39.880 13:11:58 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:39.880 13:11:58 -- common/autotest_common.sh@660 -- # es=1 00:26:39.880 13:11:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:39.880 00:26:39.880 real 0m1.697s 00:26:39.880 user 0m1.421s 00:26:39.880 sys 0m0.240s 00:26:39.880 13:11:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.880 13:11:58 -- common/autotest_common.sh@10 -- # set +x 00:26:40.138 13:11:58 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:26:40.138 13:11:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:40.138 13:11:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:40.138 13:11:58 -- common/autotest_common.sh@10 -- # set +x 00:26:40.138 ************************************ 00:26:40.138 START TEST dd_rw 00:26:40.138 ************************************ 00:26:40.138 13:11:58 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:26:40.138 13:11:58 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:26:40.138 13:11:58 -- dd/basic_rw.sh@12 -- # local count size 00:26:40.138 13:11:58 -- dd/basic_rw.sh@13 -- # local qds bss 00:26:40.138 13:11:58 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:26:40.138 13:11:58 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:40.138 13:11:58 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:40.138 13:11:58 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:40.138 13:11:58 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:40.138 13:11:58 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:40.138 13:11:58 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:40.138 13:11:58 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:40.138 13:11:58 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:40.138 13:11:58 -- dd/basic_rw.sh@23 -- # count=15 00:26:40.138 13:11:58 -- dd/basic_rw.sh@24 -- # count=15 00:26:40.138 13:11:58 -- dd/basic_rw.sh@25 -- # size=61440 00:26:40.138 13:11:58 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:40.138 13:11:58 -- dd/common.sh@98 -- # xtrace_disable 00:26:40.138 13:11:58 -- common/autotest_common.sh@10 -- # set +x 00:26:40.705 13:11:59 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
00:26:40.705 13:11:59 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:40.705 13:11:59 -- dd/common.sh@31 -- # xtrace_disable 00:26:40.705 13:11:59 -- common/autotest_common.sh@10 -- # set +x 00:26:40.705 [2024-06-11 13:11:59.340162] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:40.705 [2024-06-11 13:11:59.340581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137446 ] 00:26:40.705 { 00:26:40.706 "subsystems": [ 00:26:40.706 { 00:26:40.706 "subsystem": "bdev", 00:26:40.706 "config": [ 00:26:40.706 { 00:26:40.706 "params": { 00:26:40.706 "trtype": "pcie", 00:26:40.706 "traddr": "0000:00:06.0", 00:26:40.706 "name": "Nvme0" 00:26:40.706 }, 00:26:40.706 "method": "bdev_nvme_attach_controller" 00:26:40.706 }, 00:26:40.706 { 00:26:40.706 "method": "bdev_wait_for_examine" 00:26:40.706 } 00:26:40.706 ] 00:26:40.706 } 00:26:40.706 ] 00:26:40.706 } 00:26:40.706 [2024-06-11 13:11:59.507343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.964 [2024-06-11 13:11:59.679740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.160  Copying: 60/60 [kB] (average 19 MBps) 00:26:42.160 00:26:42.160 13:12:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:26:42.160 13:12:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:42.160 13:12:00 -- dd/common.sh@31 -- # xtrace_disable 00:26:42.160 13:12:00 -- common/autotest_common.sh@10 -- # set +x 00:26:42.160 [2024-06-11 13:12:00.988939] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:42.160 [2024-06-11 13:12:00.989385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137466 ] 00:26:42.160 { 00:26:42.160 "subsystems": [ 00:26:42.160 { 00:26:42.160 "subsystem": "bdev", 00:26:42.160 "config": [ 00:26:42.160 { 00:26:42.160 "params": { 00:26:42.160 "trtype": "pcie", 00:26:42.160 "traddr": "0000:00:06.0", 00:26:42.160 "name": "Nvme0" 00:26:42.160 }, 00:26:42.160 "method": "bdev_nvme_attach_controller" 00:26:42.160 }, 00:26:42.160 { 00:26:42.160 "method": "bdev_wait_for_examine" 00:26:42.160 } 00:26:42.160 ] 00:26:42.160 } 00:26:42.160 ] 00:26:42.160 } 00:26:42.419 [2024-06-11 13:12:01.154948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.678 [2024-06-11 13:12:01.336782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.871  Copying: 60/60 [kB] (average 19 MBps) 00:26:43.871 00:26:43.871 13:12:02 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:43.871 13:12:02 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:43.871 13:12:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:43.871 13:12:02 -- dd/common.sh@11 -- # local nvme_ref= 00:26:43.871 13:12:02 -- dd/common.sh@12 -- # local size=61440 00:26:43.871 13:12:02 -- dd/common.sh@14 -- # local bs=1048576 00:26:43.871 13:12:02 -- dd/common.sh@15 -- # local count=1 00:26:43.871 13:12:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:43.871 13:12:02 -- dd/common.sh@18 -- # gen_conf 00:26:43.871 13:12:02 -- dd/common.sh@31 -- # xtrace_disable 00:26:43.871 13:12:02 -- common/autotest_common.sh@10 -- # set +x 00:26:43.871 [2024-06-11 13:12:02.702210] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:43.871 [2024-06-11 13:12:02.702596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137498 ] 00:26:43.871 { 00:26:43.871 "subsystems": [ 00:26:43.871 { 00:26:43.871 "subsystem": "bdev", 00:26:43.871 "config": [ 00:26:43.871 { 00:26:43.871 "params": { 00:26:43.871 "trtype": "pcie", 00:26:43.871 "traddr": "0000:00:06.0", 00:26:43.871 "name": "Nvme0" 00:26:43.871 }, 00:26:43.871 "method": "bdev_nvme_attach_controller" 00:26:43.871 }, 00:26:43.871 { 00:26:43.871 "method": "bdev_wait_for_examine" 00:26:43.871 } 00:26:43.871 ] 00:26:43.871 } 00:26:43.871 ] 00:26:43.871 } 00:26:44.129 [2024-06-11 13:12:02.868782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.387 [2024-06-11 13:12:03.036739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.582  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:45.582 00:26:45.582 13:12:04 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:45.582 13:12:04 -- dd/basic_rw.sh@23 -- # count=15 00:26:45.582 13:12:04 -- dd/basic_rw.sh@24 -- # count=15 00:26:45.582 13:12:04 -- dd/basic_rw.sh@25 -- # size=61440 00:26:45.582 13:12:04 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:45.582 13:12:04 -- dd/common.sh@98 -- # xtrace_disable 00:26:45.582 13:12:04 -- common/autotest_common.sh@10 -- # set +x 00:26:46.149 13:12:04 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:26:46.149 13:12:04 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:46.149 13:12:04 -- dd/common.sh@31 -- # xtrace_disable 00:26:46.149 13:12:04 -- common/autotest_common.sh@10 -- # set +x 00:26:46.149 { 00:26:46.149 "subsystems": [ 00:26:46.149 { 00:26:46.149 "subsystem": "bdev", 00:26:46.149 "config": [ 00:26:46.149 { 00:26:46.149 "params": { 00:26:46.149 "trtype": "pcie", 00:26:46.149 "traddr": "0000:00:06.0", 00:26:46.149 "name": "Nvme0" 00:26:46.149 }, 00:26:46.149 "method": "bdev_nvme_attach_controller" 00:26:46.149 }, 00:26:46.149 { 00:26:46.149 "method": "bdev_wait_for_examine" 00:26:46.149 } 00:26:46.149 ] 00:26:46.149 } 00:26:46.149 ] 00:26:46.149 } 00:26:46.149 [2024-06-11 13:12:04.831806] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:46.149 [2024-06-11 13:12:04.832139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137548 ] 00:26:46.408 [2024-06-11 13:12:04.994766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.408 [2024-06-11 13:12:05.176234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.636  Copying: 60/60 [kB] (average 58 MBps) 00:26:47.636 00:26:47.637 13:12:06 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:26:47.637 13:12:06 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:47.895 13:12:06 -- dd/common.sh@31 -- # xtrace_disable 00:26:47.895 13:12:06 -- common/autotest_common.sh@10 -- # set +x 00:26:47.895 [2024-06-11 13:12:06.536799] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:47.895 [2024-06-11 13:12:06.537158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137579 ] 00:26:47.895 { 00:26:47.895 "subsystems": [ 00:26:47.895 { 00:26:47.895 "subsystem": "bdev", 00:26:47.895 "config": [ 00:26:47.895 { 00:26:47.895 "params": { 00:26:47.895 "trtype": "pcie", 00:26:47.895 "traddr": "0000:00:06.0", 00:26:47.895 "name": "Nvme0" 00:26:47.895 }, 00:26:47.895 "method": "bdev_nvme_attach_controller" 00:26:47.895 }, 00:26:47.895 { 00:26:47.895 "method": "bdev_wait_for_examine" 00:26:47.895 } 00:26:47.895 ] 00:26:47.895 } 00:26:47.895 ] 00:26:47.895 } 00:26:47.895 [2024-06-11 13:12:06.702956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.154 [2024-06-11 13:12:06.868141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.350  Copying: 60/60 [kB] (average 58 MBps) 00:26:49.350 00:26:49.350 13:12:08 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:49.350 13:12:08 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:49.350 13:12:08 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:49.350 13:12:08 -- dd/common.sh@11 -- # local nvme_ref= 00:26:49.350 13:12:08 -- dd/common.sh@12 -- # local size=61440 00:26:49.350 13:12:08 -- dd/common.sh@14 -- # local bs=1048576 00:26:49.350 13:12:08 -- dd/common.sh@15 -- # local count=1 00:26:49.350 13:12:08 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:49.350 13:12:08 -- dd/common.sh@18 -- # gen_conf 00:26:49.350 13:12:08 -- dd/common.sh@31 -- # xtrace_disable 00:26:49.350 13:12:08 -- common/autotest_common.sh@10 -- # set +x 00:26:49.350 [2024-06-11 13:12:08.162342] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:49.350 [2024-06-11 13:12:08.162735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137601 ] 00:26:49.350 { 00:26:49.350 "subsystems": [ 00:26:49.350 { 00:26:49.350 "subsystem": "bdev", 00:26:49.351 "config": [ 00:26:49.351 { 00:26:49.351 "params": { 00:26:49.351 "trtype": "pcie", 00:26:49.351 "traddr": "0000:00:06.0", 00:26:49.351 "name": "Nvme0" 00:26:49.351 }, 00:26:49.351 "method": "bdev_nvme_attach_controller" 00:26:49.351 }, 00:26:49.351 { 00:26:49.351 "method": "bdev_wait_for_examine" 00:26:49.351 } 00:26:49.351 ] 00:26:49.351 } 00:26:49.351 ] 00:26:49.351 } 00:26:49.610 [2024-06-11 13:12:08.327149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.869 [2024-06-11 13:12:08.487186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.065  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:51.065 00:26:51.065 13:12:09 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:51.065 13:12:09 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:51.065 13:12:09 -- dd/basic_rw.sh@23 -- # count=7 00:26:51.065 13:12:09 -- dd/basic_rw.sh@24 -- # count=7 00:26:51.065 13:12:09 -- dd/basic_rw.sh@25 -- # size=57344 00:26:51.065 13:12:09 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:51.065 13:12:09 -- dd/common.sh@98 -- # xtrace_disable 00:26:51.065 13:12:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.633 13:12:10 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:26:51.633 13:12:10 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:51.633 13:12:10 -- dd/common.sh@31 -- # xtrace_disable 00:26:51.633 13:12:10 -- common/autotest_common.sh@10 -- # set +x 00:26:51.633 [2024-06-11 13:12:10.312864] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:51.633 [2024-06-11 13:12:10.313264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137632 ] 00:26:51.633 { 00:26:51.633 "subsystems": [ 00:26:51.633 { 00:26:51.633 "subsystem": "bdev", 00:26:51.633 "config": [ 00:26:51.633 { 00:26:51.633 "params": { 00:26:51.633 "trtype": "pcie", 00:26:51.633 "traddr": "0000:00:06.0", 00:26:51.633 "name": "Nvme0" 00:26:51.633 }, 00:26:51.633 "method": "bdev_nvme_attach_controller" 00:26:51.633 }, 00:26:51.633 { 00:26:51.633 "method": "bdev_wait_for_examine" 00:26:51.633 } 00:26:51.633 ] 00:26:51.633 } 00:26:51.633 ] 00:26:51.633 } 00:26:51.893 [2024-06-11 13:12:10.479218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.893 [2024-06-11 13:12:10.641664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.088  Copying: 56/56 [kB] (average 54 MBps) 00:26:53.088 00:26:53.088 13:12:11 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:26:53.088 13:12:11 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:53.088 13:12:11 -- dd/common.sh@31 -- # xtrace_disable 00:26:53.088 13:12:11 -- common/autotest_common.sh@10 -- # set +x 00:26:53.088 [2024-06-11 13:12:11.923498] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:53.088 [2024-06-11 13:12:11.923903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137660 ] 00:26:53.088 { 00:26:53.088 "subsystems": [ 00:26:53.088 { 00:26:53.088 "subsystem": "bdev", 00:26:53.088 "config": [ 00:26:53.088 { 00:26:53.088 "params": { 00:26:53.088 "trtype": "pcie", 00:26:53.088 "traddr": "0000:00:06.0", 00:26:53.088 "name": "Nvme0" 00:26:53.088 }, 00:26:53.088 "method": "bdev_nvme_attach_controller" 00:26:53.088 }, 00:26:53.088 { 00:26:53.088 "method": "bdev_wait_for_examine" 00:26:53.088 } 00:26:53.088 ] 00:26:53.088 } 00:26:53.088 ] 00:26:53.088 } 00:26:53.347 [2024-06-11 13:12:12.088088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.607 [2024-06-11 13:12:12.247686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.801  Copying: 56/56 [kB] (average 27 MBps) 00:26:54.801 00:26:54.801 13:12:13 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:54.801 13:12:13 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:26:54.801 13:12:13 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:54.801 13:12:13 -- dd/common.sh@11 -- # local nvme_ref= 00:26:54.801 13:12:13 -- dd/common.sh@12 -- # local size=57344 00:26:54.801 13:12:13 -- dd/common.sh@14 -- # local bs=1048576 00:26:54.801 13:12:13 -- dd/common.sh@15 -- # local count=1 00:26:54.801 13:12:13 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:54.801 13:12:13 -- dd/common.sh@18 -- # gen_conf 00:26:54.801 13:12:13 -- dd/common.sh@31 -- # xtrace_disable 00:26:54.801 13:12:13 -- common/autotest_common.sh@10 -- # set +x 00:26:54.801 [2024-06-11 13:12:13.630795] Starting SPDK v24.01.1-pre git sha1 
130b9406a / DPDK 23.11.0 initialization... 00:26:54.801 [2024-06-11 13:12:13.631176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137688 ] 00:26:54.801 { 00:26:54.801 "subsystems": [ 00:26:54.801 { 00:26:54.801 "subsystem": "bdev", 00:26:54.801 "config": [ 00:26:54.801 { 00:26:54.801 "params": { 00:26:54.801 "trtype": "pcie", 00:26:54.801 "traddr": "0000:00:06.0", 00:26:54.801 "name": "Nvme0" 00:26:54.801 }, 00:26:54.801 "method": "bdev_nvme_attach_controller" 00:26:54.801 }, 00:26:54.801 { 00:26:54.801 "method": "bdev_wait_for_examine" 00:26:54.801 } 00:26:54.801 ] 00:26:54.801 } 00:26:54.801 ] 00:26:54.801 } 00:26:55.060 [2024-06-11 13:12:13.796182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.318 [2024-06-11 13:12:13.968715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.514  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:56.514 00:26:56.514 13:12:15 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:56.514 13:12:15 -- dd/basic_rw.sh@23 -- # count=7 00:26:56.514 13:12:15 -- dd/basic_rw.sh@24 -- # count=7 00:26:56.514 13:12:15 -- dd/basic_rw.sh@25 -- # size=57344 00:26:56.514 13:12:15 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:56.514 13:12:15 -- dd/common.sh@98 -- # xtrace_disable 00:26:56.514 13:12:15 -- common/autotest_common.sh@10 -- # set +x 00:26:57.082 13:12:15 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:26:57.082 13:12:15 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:57.082 13:12:15 -- dd/common.sh@31 -- # xtrace_disable 00:26:57.082 13:12:15 -- common/autotest_common.sh@10 -- # set +x 00:26:57.082 { 00:26:57.082 "subsystems": [ 00:26:57.082 { 00:26:57.082 "subsystem": "bdev", 00:26:57.082 "config": [ 00:26:57.082 { 00:26:57.082 "params": { 00:26:57.082 "trtype": "pcie", 00:26:57.082 "traddr": "0000:00:06.0", 00:26:57.082 "name": "Nvme0" 00:26:57.082 }, 00:26:57.082 "method": "bdev_nvme_attach_controller" 00:26:57.082 }, 00:26:57.082 { 00:26:57.082 "method": "bdev_wait_for_examine" 00:26:57.082 } 00:26:57.082 ] 00:26:57.082 } 00:26:57.082 ] 00:26:57.082 } 00:26:57.082 [2024-06-11 13:12:15.717391] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:57.082 [2024-06-11 13:12:15.717770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137743 ] 00:26:57.082 [2024-06-11 13:12:15.880205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.340 [2024-06-11 13:12:16.040872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.535  Copying: 56/56 [kB] (average 54 MBps) 00:26:58.535 00:26:58.535 13:12:17 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:26:58.535 13:12:17 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:58.535 13:12:17 -- dd/common.sh@31 -- # xtrace_disable 00:26:58.535 13:12:17 -- common/autotest_common.sh@10 -- # set +x 00:26:58.793 [2024-06-11 13:12:17.420718] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:58.793 [2024-06-11 13:12:17.421134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137763 ] 00:26:58.793 { 00:26:58.793 "subsystems": [ 00:26:58.793 { 00:26:58.793 "subsystem": "bdev", 00:26:58.793 "config": [ 00:26:58.793 { 00:26:58.793 "params": { 00:26:58.793 "trtype": "pcie", 00:26:58.793 "traddr": "0000:00:06.0", 00:26:58.793 "name": "Nvme0" 00:26:58.793 }, 00:26:58.793 "method": "bdev_nvme_attach_controller" 00:26:58.793 }, 00:26:58.793 { 00:26:58.793 "method": "bdev_wait_for_examine" 00:26:58.793 } 00:26:58.793 ] 00:26:58.793 } 00:26:58.793 ] 00:26:58.793 } 00:26:58.793 [2024-06-11 13:12:17.587285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.052 [2024-06-11 13:12:17.775561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.250  Copying: 56/56 [kB] (average 54 MBps) 00:27:00.250 00:27:00.250 13:12:19 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:00.250 13:12:19 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:00.250 13:12:19 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:00.250 13:12:19 -- dd/common.sh@11 -- # local nvme_ref= 00:27:00.250 13:12:19 -- dd/common.sh@12 -- # local size=57344 00:27:00.250 13:12:19 -- dd/common.sh@14 -- # local bs=1048576 00:27:00.250 13:12:19 -- dd/common.sh@15 -- # local count=1 00:27:00.250 13:12:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:00.250 13:12:19 -- dd/common.sh@18 -- # gen_conf 00:27:00.250 13:12:19 -- dd/common.sh@31 -- # xtrace_disable 00:27:00.250 13:12:19 -- common/autotest_common.sh@10 -- # set +x 00:27:00.250 { 00:27:00.250 "subsystems": [ 00:27:00.250 { 00:27:00.250 "subsystem": "bdev", 00:27:00.250 "config": [ 00:27:00.250 { 00:27:00.250 "params": { 00:27:00.250 "trtype": "pcie", 00:27:00.250 "traddr": "0000:00:06.0", 00:27:00.250 "name": "Nvme0" 00:27:00.250 }, 00:27:00.250 "method": "bdev_nvme_attach_controller" 00:27:00.251 }, 00:27:00.251 { 00:27:00.251 "method": "bdev_wait_for_examine" 00:27:00.251 } 00:27:00.251 ] 00:27:00.251 } 00:27:00.251 ] 00:27:00.251 } 00:27:00.251 [2024-06-11 13:12:19.068264] Starting SPDK v24.01.1-pre git sha1 
130b9406a / DPDK 23.11.0 initialization... 00:27:00.251 [2024-06-11 13:12:19.068596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137795 ] 00:27:00.508 [2024-06-11 13:12:19.235432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.767 [2024-06-11 13:12:19.403450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.962  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:01.962 00:27:01.962 13:12:20 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:01.962 13:12:20 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:01.962 13:12:20 -- dd/basic_rw.sh@23 -- # count=3 00:27:01.962 13:12:20 -- dd/basic_rw.sh@24 -- # count=3 00:27:01.962 13:12:20 -- dd/basic_rw.sh@25 -- # size=49152 00:27:01.962 13:12:20 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:01.962 13:12:20 -- dd/common.sh@98 -- # xtrace_disable 00:27:01.962 13:12:20 -- common/autotest_common.sh@10 -- # set +x 00:27:02.528 13:12:21 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:27:02.528 13:12:21 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:02.528 13:12:21 -- dd/common.sh@31 -- # xtrace_disable 00:27:02.528 13:12:21 -- common/autotest_common.sh@10 -- # set +x 00:27:02.528 [2024-06-11 13:12:21.168128] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:02.528 [2024-06-11 13:12:21.168582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137823 ] 00:27:02.528 { 00:27:02.528 "subsystems": [ 00:27:02.528 { 00:27:02.528 "subsystem": "bdev", 00:27:02.528 "config": [ 00:27:02.528 { 00:27:02.528 "params": { 00:27:02.528 "trtype": "pcie", 00:27:02.528 "traddr": "0000:00:06.0", 00:27:02.528 "name": "Nvme0" 00:27:02.528 }, 00:27:02.528 "method": "bdev_nvme_attach_controller" 00:27:02.528 }, 00:27:02.528 { 00:27:02.528 "method": "bdev_wait_for_examine" 00:27:02.528 } 00:27:02.528 ] 00:27:02.528 } 00:27:02.528 ] 00:27:02.528 } 00:27:02.528 [2024-06-11 13:12:21.334457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.786 [2024-06-11 13:12:21.506390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.980  Copying: 48/48 [kB] (average 46 MBps) 00:27:03.980 00:27:03.980 13:12:22 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:27:03.980 13:12:22 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:03.980 13:12:22 -- dd/common.sh@31 -- # xtrace_disable 00:27:03.980 13:12:22 -- common/autotest_common.sh@10 -- # set +x 00:27:03.980 [2024-06-11 13:12:22.790082] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:03.980 [2024-06-11 13:12:22.790412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137854 ] 00:27:03.980 { 00:27:03.980 "subsystems": [ 00:27:03.980 { 00:27:03.980 "subsystem": "bdev", 00:27:03.980 "config": [ 00:27:03.980 { 00:27:03.980 "params": { 00:27:03.980 "trtype": "pcie", 00:27:03.980 "traddr": "0000:00:06.0", 00:27:03.980 "name": "Nvme0" 00:27:03.980 }, 00:27:03.980 "method": "bdev_nvme_attach_controller" 00:27:03.980 }, 00:27:03.980 { 00:27:03.980 "method": "bdev_wait_for_examine" 00:27:03.980 } 00:27:03.980 ] 00:27:03.980 } 00:27:03.980 ] 00:27:03.980 } 00:27:04.239 [2024-06-11 13:12:22.958912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.497 [2024-06-11 13:12:23.116646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.692  Copying: 48/48 [kB] (average 46 MBps) 00:27:05.692 00:27:05.692 13:12:24 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:05.692 13:12:24 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:05.692 13:12:24 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:05.692 13:12:24 -- dd/common.sh@11 -- # local nvme_ref= 00:27:05.692 13:12:24 -- dd/common.sh@12 -- # local size=49152 00:27:05.692 13:12:24 -- dd/common.sh@14 -- # local bs=1048576 00:27:05.692 13:12:24 -- dd/common.sh@15 -- # local count=1 00:27:05.692 13:12:24 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:05.692 13:12:24 -- dd/common.sh@18 -- # gen_conf 00:27:05.692 13:12:24 -- dd/common.sh@31 -- # xtrace_disable 00:27:05.692 13:12:24 -- common/autotest_common.sh@10 -- # set +x 00:27:05.692 [2024-06-11 13:12:24.502725] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:05.692 [2024-06-11 13:12:24.503131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137876 ] 00:27:05.692 { 00:27:05.692 "subsystems": [ 00:27:05.692 { 00:27:05.692 "subsystem": "bdev", 00:27:05.692 "config": [ 00:27:05.692 { 00:27:05.692 "params": { 00:27:05.692 "trtype": "pcie", 00:27:05.692 "traddr": "0000:00:06.0", 00:27:05.692 "name": "Nvme0" 00:27:05.692 }, 00:27:05.692 "method": "bdev_nvme_attach_controller" 00:27:05.692 }, 00:27:05.692 { 00:27:05.692 "method": "bdev_wait_for_examine" 00:27:05.692 } 00:27:05.692 ] 00:27:05.692 } 00:27:05.692 ] 00:27:05.692 } 00:27:05.951 [2024-06-11 13:12:24.670983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.211 [2024-06-11 13:12:24.842631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.407  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:07.408 00:27:07.408 13:12:26 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:07.408 13:12:26 -- dd/basic_rw.sh@23 -- # count=3 00:27:07.408 13:12:26 -- dd/basic_rw.sh@24 -- # count=3 00:27:07.408 13:12:26 -- dd/basic_rw.sh@25 -- # size=49152 00:27:07.408 13:12:26 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:07.408 13:12:26 -- dd/common.sh@98 -- # xtrace_disable 00:27:07.408 13:12:26 -- common/autotest_common.sh@10 -- # set +x 00:27:07.666 13:12:26 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:27:07.666 13:12:26 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:07.666 13:12:26 -- dd/common.sh@31 -- # xtrace_disable 00:27:07.666 13:12:26 -- common/autotest_common.sh@10 -- # set +x 00:27:07.925 [2024-06-11 13:12:26.516001] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:07.925 [2024-06-11 13:12:26.516339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137929 ] 00:27:07.925 { 00:27:07.925 "subsystems": [ 00:27:07.925 { 00:27:07.925 "subsystem": "bdev", 00:27:07.925 "config": [ 00:27:07.925 { 00:27:07.925 "params": { 00:27:07.925 "trtype": "pcie", 00:27:07.925 "traddr": "0000:00:06.0", 00:27:07.925 "name": "Nvme0" 00:27:07.925 }, 00:27:07.925 "method": "bdev_nvme_attach_controller" 00:27:07.925 }, 00:27:07.925 { 00:27:07.925 "method": "bdev_wait_for_examine" 00:27:07.925 } 00:27:07.925 ] 00:27:07.925 } 00:27:07.925 ] 00:27:07.925 } 00:27:07.925 [2024-06-11 13:12:26.670286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.184 [2024-06-11 13:12:26.854958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.380  Copying: 48/48 [kB] (average 46 MBps) 00:27:09.380 00:27:09.380 13:12:28 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:27:09.380 13:12:28 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:09.380 13:12:28 -- dd/common.sh@31 -- # xtrace_disable 00:27:09.380 13:12:28 -- common/autotest_common.sh@10 -- # set +x 00:27:09.380 { 00:27:09.380 "subsystems": [ 00:27:09.380 { 00:27:09.380 "subsystem": "bdev", 00:27:09.380 "config": [ 00:27:09.380 { 00:27:09.380 "params": { 00:27:09.380 "trtype": "pcie", 00:27:09.380 "traddr": "0000:00:06.0", 00:27:09.380 "name": "Nvme0" 00:27:09.380 }, 00:27:09.380 "method": "bdev_nvme_attach_controller" 00:27:09.380 }, 00:27:09.380 { 00:27:09.380 "method": "bdev_wait_for_examine" 00:27:09.380 } 00:27:09.380 ] 00:27:09.380 } 00:27:09.380 ] 00:27:09.380 } 00:27:09.380 [2024-06-11 13:12:28.211523] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:09.380 [2024-06-11 13:12:28.211870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137957 ] 00:27:09.639 [2024-06-11 13:12:28.395824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.898 [2024-06-11 13:12:28.584984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.093  Copying: 48/48 [kB] (average 46 MBps) 00:27:11.093 00:27:11.093 13:12:29 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:11.093 13:12:29 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:11.093 13:12:29 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:11.093 13:12:29 -- dd/common.sh@11 -- # local nvme_ref= 00:27:11.093 13:12:29 -- dd/common.sh@12 -- # local size=49152 00:27:11.093 13:12:29 -- dd/common.sh@14 -- # local bs=1048576 00:27:11.093 13:12:29 -- dd/common.sh@15 -- # local count=1 00:27:11.093 13:12:29 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:11.093 13:12:29 -- dd/common.sh@18 -- # gen_conf 00:27:11.093 13:12:29 -- dd/common.sh@31 -- # xtrace_disable 00:27:11.093 13:12:29 -- common/autotest_common.sh@10 -- # set +x 00:27:11.093 [2024-06-11 13:12:29.879144] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:11.093 [2024-06-11 13:12:29.879519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137985 ] 00:27:11.093 { 00:27:11.093 "subsystems": [ 00:27:11.093 { 00:27:11.093 "subsystem": "bdev", 00:27:11.093 "config": [ 00:27:11.093 { 00:27:11.093 "params": { 00:27:11.093 "trtype": "pcie", 00:27:11.093 "traddr": "0000:00:06.0", 00:27:11.093 "name": "Nvme0" 00:27:11.093 }, 00:27:11.093 "method": "bdev_nvme_attach_controller" 00:27:11.093 }, 00:27:11.093 { 00:27:11.093 "method": "bdev_wait_for_examine" 00:27:11.093 } 00:27:11.093 ] 00:27:11.093 } 00:27:11.093 ] 00:27:11.093 } 00:27:11.351 [2024-06-11 13:12:30.048138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.610 [2024-06-11 13:12:30.232978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.831  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:12.831 00:27:12.831 ************************************ 00:27:12.831 END TEST dd_rw 00:27:12.831 ************************************ 00:27:12.831 00:27:12.831 real 0m32.780s 00:27:12.831 user 0m27.267s 00:27:12.831 sys 0m4.277s 00:27:12.831 13:12:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.831 13:12:31 -- common/autotest_common.sh@10 -- # set +x 00:27:12.831 13:12:31 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:27:12.831 13:12:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:12.831 13:12:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:12.831 13:12:31 -- common/autotest_common.sh@10 -- # set +x 00:27:12.831 ************************************ 00:27:12.831 START TEST dd_rw_offset 00:27:12.831 ************************************ 00:27:12.831 13:12:31 -- common/autotest_common.sh@1104 -- # basic_offset 00:27:12.831 13:12:31 -- dd/basic_rw.sh@52 -- # local count 
seek skip data data_check 00:27:12.831 13:12:31 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:27:12.831 13:12:31 -- dd/common.sh@98 -- # xtrace_disable 00:27:12.831 13:12:31 -- common/autotest_common.sh@10 -- # set +x 00:27:12.831 13:12:31 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:27:12.831 13:12:31 -- dd/basic_rw.sh@56 -- # data=kvrif39n17unqopcn9az1y2jpaabiewncmobme29y2d8d26o8zm5s8qd6gc6imw5ifue93t5uyimipy298eq08reiwicvviufsxmgvryombmkoyylypvxu8kisiev43d8871rscsw78xaubnjjcrb4fu07wi6rtow4roqxhh03y31w6gkpv3c6kgypi1viot768g92sjt39akvsuaecpvak45jbay3nsnlmjbfnh3aiglrf5x8xvwhmp0zsypac3ncmuoezhnxu85wdjc44vtjzvmwlllxvnb4vn1ygpwkznmw54khhhyw9j4nfocy9ym1pwt45r8w8crzl783rxsfug33vjlunokprgn9zlxw074oco36bxy9ermldis9guwtzj3nlq8a698j224n98ddy3vc9ukn3zb75k8ohe07x2r4qy36fd2ikt867z2ookkrrr5la27uyc5r9aqmp48mrfwwc2sgfv2glm288a67vh821kcdvnb60ggilq3q3zl7e1zfkjiipjbuxl0wqyxz83wdnin0eeprdwz1rfd7pezunab4zzp190i53deo0gb67ag7x7yej82nueytwo8xa2qnnrp0jtujmsofgrfatn11gtx4zsqpduxmhj6sficori2i1h6zgtpjqk80nedj2qjnkeennf6dlywr8vtb8450o2kustec2kvm1wmpbfqdsp7plw0ze4magj35s9dyywdgufs783ldahoz5nyko57p5bxepwa8w4lk0whyogunvfpvw16eiygiy761vu2cn5n3blzhxjz5v3ot2y56g6y6b01vc1zbif61doa106xuf8ejnh9j29ct5n83nie0mqvb73v9genk1apvsxvnsxqmtckclpxv3hfa3zgzf52j6sc1tr41tuelw1ut4439jc0wmnyxmgsadwadmwwn5xzgpy83fetsih3fpbgtk6p8im9s0n8jlduozx4pfkc5yb3g8e4xcwmtmq7lxwii6mj242vwpi2ymspnutqd2cd9y3b1667yp70xwlzzmwh1wnqm0d1xc7dkiocv8k15nw9tqy6v42vktypvck1huwf3pequncf995a5topwlb84mwgo9v00jzvstvdr4h80clq0da07jvkqneqgi1v8qhqbi6rs242mh5qgrpjc1308quqep5j4yvb9d7n4mllr4wvl4vfoek08hwlp9bpxcl95pn3qxxm01338kzyliytxcppatr11o19q555xj699qlfob9oslxl7nt934wiza05nj5nmbe5dtpxu4p0ofliev3de3h2667oy7wkcdwcetl3j97cwi9p5l51cdc9ohkqyi4ynguwiklwhpd7r0llb94vzcbz5553xqfs8t2267xs9xk3jxs61xiaev4pr57vbrmv6vosk8ecvib45br4oyr783f8wupo9e94v0bmfvpzji83s32k3z8hparbep4zxbtcumcawtvfh7tt9rxzeociw7zn5h2nahqe2wpcip1mkoc7zx8g14m4vy533zfsysiipylq355jhdvy1p6qwl3d3zxm5r23m0h7dg4cv6u2zt8f4po7j1qta9zx7c2qovx41q5u07ryw2wrd4n9g1md0340cqcfrm5zrmjoo0tv1iem24x7odrdttnqd4i7sabr47kyh4cxehd08d2iqqcur7a8ciy5n4lcb75849u30phfqbivkm38mktqlfkip9y7tg1l4cwdwxg13twqaivruswcrucluu8g6v0xwqkz72j7tyuwbaelwcirxofxryrbf7s58e1x0aj2l7myqqqjwd5lg69w6cq6guhz31o1t4okrov2ks0crhrqh38u83c79i95juz3y9dectcny6febszh76kmw7ek7hhuxv2366givg3hwlum3uqcogxnsvnylp2r7g0zqv89mae1afaomk302dc7o1l8mmnrqgfxh2nxepvixda7w0hzxxs1pjo2dbp0k6ug0su7k2zo1h23cfgin3goxe3h15s9xgoo9aybobo5lj0ld1ro3lfky87iddljfn4luq16jmfifr64klzwfl1bumivk5bhog743ubkiz796zwa0i66w62evciwsqfjpv0lqqx4oy9n2cd4fu9l59jdrz3hsy8fbjbpac9pv5roydlmivi02r79rmdem7um7j15h1qrz16cxa0b8feife6q31i1s8az41d01yn2jusk1j6u99krg56hlw7vj6zs8p76my81eubwbgica33hdm41oe74ogjkp1zm6dpkgjxxdlru51yc3nfgfmr42y55r9xxhzlz8kg6u8z026ibxv8yfh2w3vmebtocz1am6am0ub8nrqcl6tg46ju0xicalfe7sfs7q54jmwfnxr8b1fmuf5x9457a8gynevmdacfwdqzr4hyq26xp1be79heh57lxbn4yxabyqlb7r27wvrwvb4x9s25q30ny119wazsq81jytb8xbmhx90mtseagoe02x2r2tnqgfc3xrf80715b2nclkqa3frg7o6lko234k729tku05jyxo36ijbqtrxt9ibo8fkylmgpumdr1dp79thojk4hfcz3697pk26mfin2v126y9dgmkv1kbxt8p3lgqwdyo8mdzj2956vh47lh3gfenyppx9329vzq2e19r29efuei8l4usszaaycfdtm99jbn94t0q0q2bf1x1urncbhft7gnv5ml1ocv3t3jy70ta11c1h32wssca5ieqbm4yw0jmx6tl1dshjk1aa1ikp1wstzn5ov8vf4hga9dh1xhks5r5c9uafbf6ja4l93c6hstgh0alx17uguai1wrrioejo87am9853p913448inmcxwlq9nsammnfajdast11rxrpvwnq5upe3ium159za51vydjnvgagv85id41u45ml0bxfykclrusrs576yqhkdnrzeuzweza8vdxhk0nwv3jwd6l711garzehhsa2enyrfjgprbd3vw1v4lgayemgk49nbci4065b2xvbin8f7g2pzx6jk9lel9hby2i0hjqko9y0w3bxyusm7qicbryk7t786kw95djxp1goutffqvurjjlmscyntj86ry80kf2yqogoro19nlxjt4ghf7r6peq3qvqw5irpwsp3bmdiknxu5gi7guhbph6yodf147sc3e6nzfh88glcad
hybyjps8ybll3aj08shbuj8ki5st8rrmp3eyp8936q8f1go9plzz7q4zmn12qrt8txtgxgw80qs6wn3nmgx8a3wjun2a8n0v1iafar3m13qy2jhgjknwdjrezdxghuxzqjqxj6o153awaesrxmpabrte73bkappbhzd2ip6qyxvju8cai5vq6ozubsbnqlofk8xny2k49n9qnswnwqiafv5t15b04tcxs64jogcsfknxzk2tdnrf73sevfrgm2xud67nz1j9mx0ooz6rqdvpq2yl079s8oqyjmb0v52n043vpr9dqg9w9ncx207bw8eotvdlxdxsxadqobvk09mk79znzg7zoble5rprlmxt3vp30pcspv9mow474937dhxhpyct32radzz7bnbxmgitee18ecp4r6eyd91v0k897fvq0bzjg9yf5k67zldec36pqyq7ob3xwgiuhcrlvamxuii9co4o6k2hgnlz4llkp57qw8yy4jwkg97535stxpqsjlpm6jbj29adev97drcfdo0wubw99vb22k3ylc2ltgtdwrbk62omq54me2c7mlazo9rkerkbbkpc80lqn0oh9hw0ei1sxtyxfvqsmd0yh98tb8dy216pp8herjqifsitp2rbre95i3bo9enpzmjtg31rasq3h7hv1ppzwy1ye557cywt09f5e3tqlhxxocw0svv5rdgn1ucck1uzhur9a8dxc1jfx3u87aeq81z7q8bnwa8o1mw75183z1veamjmk4wgyr4n42ob2uox62rc6k5ixspt86890a5676rmjmc1zj1k15hodop7qeaoogm11ablzo9osmzyf4057z1i8f6q8wpuvv4x7fuo3jv2ghtwaqez51xlcr15 00:27:12.831 13:12:31 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:27:12.831 13:12:31 -- dd/basic_rw.sh@59 -- # gen_conf 00:27:12.831 13:12:31 -- dd/common.sh@31 -- # xtrace_disable 00:27:12.831 13:12:31 -- common/autotest_common.sh@10 -- # set +x 00:27:13.090 [2024-06-11 13:12:31.723153] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:13.090 [2024-06-11 13:12:31.723508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138030 ] 00:27:13.090 { 00:27:13.090 "subsystems": [ 00:27:13.090 { 00:27:13.090 "subsystem": "bdev", 00:27:13.090 "config": [ 00:27:13.090 { 00:27:13.090 "params": { 00:27:13.090 "trtype": "pcie", 00:27:13.090 "traddr": "0000:00:06.0", 00:27:13.090 "name": "Nvme0" 00:27:13.090 }, 00:27:13.090 "method": "bdev_nvme_attach_controller" 00:27:13.090 }, 00:27:13.090 { 00:27:13.090 "method": "bdev_wait_for_examine" 00:27:13.090 } 00:27:13.090 ] 00:27:13.090 } 00:27:13.090 ] 00:27:13.090 } 00:27:13.090 [2024-06-11 13:12:31.890760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.349 [2024-06-11 13:12:32.062638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.541  Copying: 4096/4096 [B] (average 4000 kBps) 00:27:14.541 00:27:14.541 13:12:33 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:27:14.541 13:12:33 -- dd/basic_rw.sh@65 -- # gen_conf 00:27:14.541 13:12:33 -- dd/common.sh@31 -- # xtrace_disable 00:27:14.541 13:12:33 -- common/autotest_common.sh@10 -- # set +x 00:27:14.541 [2024-06-11 13:12:33.374038] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:14.541 [2024-06-11 13:12:33.374441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138061 ] 00:27:14.800 { 00:27:14.800 "subsystems": [ 00:27:14.800 { 00:27:14.800 "subsystem": "bdev", 00:27:14.800 "config": [ 00:27:14.800 { 00:27:14.800 "params": { 00:27:14.800 "trtype": "pcie", 00:27:14.800 "traddr": "0000:00:06.0", 00:27:14.800 "name": "Nvme0" 00:27:14.800 }, 00:27:14.800 "method": "bdev_nvme_attach_controller" 00:27:14.800 }, 00:27:14.800 { 00:27:14.800 "method": "bdev_wait_for_examine" 00:27:14.800 } 00:27:14.800 ] 00:27:14.800 } 00:27:14.800 ] 00:27:14.800 } 00:27:14.800 [2024-06-11 13:12:33.527422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.058 [2024-06-11 13:12:33.689513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.252  Copying: 4096/4096 [B] (average 4000 kBps) 00:27:16.252 00:27:16.252 13:12:34 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:27:16.253 13:12:34 -- dd/basic_rw.sh@72 -- # [[ kvrif39n17unqopcn9az1y2jpaabiewncmobme29y2d8d26o8zm5s8qd6gc6imw5ifue93t5uyimipy298eq08reiwicvviufsxmgvryombmkoyylypvxu8kisiev43d8871rscsw78xaubnjjcrb4fu07wi6rtow4roqxhh03y31w6gkpv3c6kgypi1viot768g92sjt39akvsuaecpvak45jbay3nsnlmjbfnh3aiglrf5x8xvwhmp0zsypac3ncmuoezhnxu85wdjc44vtjzvmwlllxvnb4vn1ygpwkznmw54khhhyw9j4nfocy9ym1pwt45r8w8crzl783rxsfug33vjlunokprgn9zlxw074oco36bxy9ermldis9guwtzj3nlq8a698j224n98ddy3vc9ukn3zb75k8ohe07x2r4qy36fd2ikt867z2ookkrrr5la27uyc5r9aqmp48mrfwwc2sgfv2glm288a67vh821kcdvnb60ggilq3q3zl7e1zfkjiipjbuxl0wqyxz83wdnin0eeprdwz1rfd7pezunab4zzp190i53deo0gb67ag7x7yej82nueytwo8xa2qnnrp0jtujmsofgrfatn11gtx4zsqpduxmhj6sficori2i1h6zgtpjqk80nedj2qjnkeennf6dlywr8vtb8450o2kustec2kvm1wmpbfqdsp7plw0ze4magj35s9dyywdgufs783ldahoz5nyko57p5bxepwa8w4lk0whyogunvfpvw16eiygiy761vu2cn5n3blzhxjz5v3ot2y56g6y6b01vc1zbif61doa106xuf8ejnh9j29ct5n83nie0mqvb73v9genk1apvsxvnsxqmtckclpxv3hfa3zgzf52j6sc1tr41tuelw1ut4439jc0wmnyxmgsadwadmwwn5xzgpy83fetsih3fpbgtk6p8im9s0n8jlduozx4pfkc5yb3g8e4xcwmtmq7lxwii6mj242vwpi2ymspnutqd2cd9y3b1667yp70xwlzzmwh1wnqm0d1xc7dkiocv8k15nw9tqy6v42vktypvck1huwf3pequncf995a5topwlb84mwgo9v00jzvstvdr4h80clq0da07jvkqneqgi1v8qhqbi6rs242mh5qgrpjc1308quqep5j4yvb9d7n4mllr4wvl4vfoek08hwlp9bpxcl95pn3qxxm01338kzyliytxcppatr11o19q555xj699qlfob9oslxl7nt934wiza05nj5nmbe5dtpxu4p0ofliev3de3h2667oy7wkcdwcetl3j97cwi9p5l51cdc9ohkqyi4ynguwiklwhpd7r0llb94vzcbz5553xqfs8t2267xs9xk3jxs61xiaev4pr57vbrmv6vosk8ecvib45br4oyr783f8wupo9e94v0bmfvpzji83s32k3z8hparbep4zxbtcumcawtvfh7tt9rxzeociw7zn5h2nahqe2wpcip1mkoc7zx8g14m4vy533zfsysiipylq355jhdvy1p6qwl3d3zxm5r23m0h7dg4cv6u2zt8f4po7j1qta9zx7c2qovx41q5u07ryw2wrd4n9g1md0340cqcfrm5zrmjoo0tv1iem24x7odrdttnqd4i7sabr47kyh4cxehd08d2iqqcur7a8ciy5n4lcb75849u30phfqbivkm38mktqlfkip9y7tg1l4cwdwxg13twqaivruswcrucluu8g6v0xwqkz72j7tyuwbaelwcirxofxryrbf7s58e1x0aj2l7myqqqjwd5lg69w6cq6guhz31o1t4okrov2ks0crhrqh38u83c79i95juz3y9dectcny6febszh76kmw7ek7hhuxv2366givg3hwlum3uqcogxnsvnylp2r7g0zqv89mae1afaomk302dc7o1l8mmnrqgfxh2nxepvixda7w0hzxxs1pjo2dbp0k6ug0su7k2zo1h23cfgin3goxe3h15s9xgoo9aybobo5lj0ld1ro3lfky87iddljfn4luq16jmfifr64klzwfl1bumivk5bhog743ubkiz796zwa0i66w62evciwsqfjpv0lqqx4oy9n2cd4fu9l59jdrz3hsy8fbjbpac9pv5roydlmivi02r79rmdem7um7j15h1qrz16cxa0b8feife6q31i1s8az41d01yn2jusk1j6u99krg56hlw7vj6zs8p76my81eubwbgica33hdm41oe74ogjkp1zm6dpkgjxxdlru51yc3nfgfmr42y55r9xxhzlz8kg6u8z026ibxv8yfh2w3vmebtocz1am6am0ub8nrqcl6tg46ju0xicalfe7sfs7
q54jmwfnxr8b1fmuf5x9457a8gynevmdacfwdqzr4hyq26xp1be79heh57lxbn4yxabyqlb7r27wvrwvb4x9s25q30ny119wazsq81jytb8xbmhx90mtseagoe02x2r2tnqgfc3xrf80715b2nclkqa3frg7o6lko234k729tku05jyxo36ijbqtrxt9ibo8fkylmgpumdr1dp79thojk4hfcz3697pk26mfin2v126y9dgmkv1kbxt8p3lgqwdyo8mdzj2956vh47lh3gfenyppx9329vzq2e19r29efuei8l4usszaaycfdtm99jbn94t0q0q2bf1x1urncbhft7gnv5ml1ocv3t3jy70ta11c1h32wssca5ieqbm4yw0jmx6tl1dshjk1aa1ikp1wstzn5ov8vf4hga9dh1xhks5r5c9uafbf6ja4l93c6hstgh0alx17uguai1wrrioejo87am9853p913448inmcxwlq9nsammnfajdast11rxrpvwnq5upe3ium159za51vydjnvgagv85id41u45ml0bxfykclrusrs576yqhkdnrzeuzweza8vdxhk0nwv3jwd6l711garzehhsa2enyrfjgprbd3vw1v4lgayemgk49nbci4065b2xvbin8f7g2pzx6jk9lel9hby2i0hjqko9y0w3bxyusm7qicbryk7t786kw95djxp1goutffqvurjjlmscyntj86ry80kf2yqogoro19nlxjt4ghf7r6peq3qvqw5irpwsp3bmdiknxu5gi7guhbph6yodf147sc3e6nzfh88glcadhybyjps8ybll3aj08shbuj8ki5st8rrmp3eyp8936q8f1go9plzz7q4zmn12qrt8txtgxgw80qs6wn3nmgx8a3wjun2a8n0v1iafar3m13qy2jhgjknwdjrezdxghuxzqjqxj6o153awaesrxmpabrte73bkappbhzd2ip6qyxvju8cai5vq6ozubsbnqlofk8xny2k49n9qnswnwqiafv5t15b04tcxs64jogcsfknxzk2tdnrf73sevfrgm2xud67nz1j9mx0ooz6rqdvpq2yl079s8oqyjmb0v52n043vpr9dqg9w9ncx207bw8eotvdlxdxsxadqobvk09mk79znzg7zoble5rprlmxt3vp30pcspv9mow474937dhxhpyct32radzz7bnbxmgitee18ecp4r6eyd91v0k897fvq0bzjg9yf5k67zldec36pqyq7ob3xwgiuhcrlvamxuii9co4o6k2hgnlz4llkp57qw8yy4jwkg97535stxpqsjlpm6jbj29adev97drcfdo0wubw99vb22k3ylc2ltgtdwrbk62omq54me2c7mlazo9rkerkbbkpc80lqn0oh9hw0ei1sxtyxfvqsmd0yh98tb8dy216pp8herjqifsitp2rbre95i3bo9enpzmjtg31rasq3h7hv1ppzwy1ye557cywt09f5e3tqlhxxocw0svv5rdgn1ucck1uzhur9a8dxc1jfx3u87aeq81z7q8bnwa8o1mw75183z1veamjmk4wgyr4n42ob2uox62rc6k5ixspt86890a5676rmjmc1zj1k15hodop7qeaoogm11ablzo9osmzyf4057z1i8f6q8wpuvv4x7fuo3jv2ghtwaqez51xlcr15 == \k\v\r\i\f\3\9\n\1\7\u\n\q\o\p\c\n\9\a\z\1\y\2\j\p\a\a\b\i\e\w\n\c\m\o\b\m\e\2\9\y\2\d\8\d\2\6\o\8\z\m\5\s\8\q\d\6\g\c\6\i\m\w\5\i\f\u\e\9\3\t\5\u\y\i\m\i\p\y\2\9\8\e\q\0\8\r\e\i\w\i\c\v\v\i\u\f\s\x\m\g\v\r\y\o\m\b\m\k\o\y\y\l\y\p\v\x\u\8\k\i\s\i\e\v\4\3\d\8\8\7\1\r\s\c\s\w\7\8\x\a\u\b\n\j\j\c\r\b\4\f\u\0\7\w\i\6\r\t\o\w\4\r\o\q\x\h\h\0\3\y\3\1\w\6\g\k\p\v\3\c\6\k\g\y\p\i\1\v\i\o\t\7\6\8\g\9\2\s\j\t\3\9\a\k\v\s\u\a\e\c\p\v\a\k\4\5\j\b\a\y\3\n\s\n\l\m\j\b\f\n\h\3\a\i\g\l\r\f\5\x\8\x\v\w\h\m\p\0\z\s\y\p\a\c\3\n\c\m\u\o\e\z\h\n\x\u\8\5\w\d\j\c\4\4\v\t\j\z\v\m\w\l\l\l\x\v\n\b\4\v\n\1\y\g\p\w\k\z\n\m\w\5\4\k\h\h\h\y\w\9\j\4\n\f\o\c\y\9\y\m\1\p\w\t\4\5\r\8\w\8\c\r\z\l\7\8\3\r\x\s\f\u\g\3\3\v\j\l\u\n\o\k\p\r\g\n\9\z\l\x\w\0\7\4\o\c\o\3\6\b\x\y\9\e\r\m\l\d\i\s\9\g\u\w\t\z\j\3\n\l\q\8\a\6\9\8\j\2\2\4\n\9\8\d\d\y\3\v\c\9\u\k\n\3\z\b\7\5\k\8\o\h\e\0\7\x\2\r\4\q\y\3\6\f\d\2\i\k\t\8\6\7\z\2\o\o\k\k\r\r\r\5\l\a\2\7\u\y\c\5\r\9\a\q\m\p\4\8\m\r\f\w\w\c\2\s\g\f\v\2\g\l\m\2\8\8\a\6\7\v\h\8\2\1\k\c\d\v\n\b\6\0\g\g\i\l\q\3\q\3\z\l\7\e\1\z\f\k\j\i\i\p\j\b\u\x\l\0\w\q\y\x\z\8\3\w\d\n\i\n\0\e\e\p\r\d\w\z\1\r\f\d\7\p\e\z\u\n\a\b\4\z\z\p\1\9\0\i\5\3\d\e\o\0\g\b\6\7\a\g\7\x\7\y\e\j\8\2\n\u\e\y\t\w\o\8\x\a\2\q\n\n\r\p\0\j\t\u\j\m\s\o\f\g\r\f\a\t\n\1\1\g\t\x\4\z\s\q\p\d\u\x\m\h\j\6\s\f\i\c\o\r\i\2\i\1\h\6\z\g\t\p\j\q\k\8\0\n\e\d\j\2\q\j\n\k\e\e\n\n\f\6\d\l\y\w\r\8\v\t\b\8\4\5\0\o\2\k\u\s\t\e\c\2\k\v\m\1\w\m\p\b\f\q\d\s\p\7\p\l\w\0\z\e\4\m\a\g\j\3\5\s\9\d\y\y\w\d\g\u\f\s\7\8\3\l\d\a\h\o\z\5\n\y\k\o\5\7\p\5\b\x\e\p\w\a\8\w\4\l\k\0\w\h\y\o\g\u\n\v\f\p\v\w\1\6\e\i\y\g\i\y\7\6\1\v\u\2\c\n\5\n\3\b\l\z\h\x\j\z\5\v\3\o\t\2\y\5\6\g\6\y\6\b\0\1\v\c\1\z\b\i\f\6\1\d\o\a\1\0\6\x\u\f\8\e\j\n\h\9\j\2\9\c\t\5\n\8\3\n\i\e\0\m\q\v\b\7\3\v\9\g\e\n\k\1\a\p\v\s\x\v\n\s\x\q\m\t\c\k\c\l\p\x\v\3\h\f\a\3\z\g\z\f\5\2\j\6\s\c\1\t\r\4\1\t\u\e\l\w\1\u\t\4\4\3\9\j\c
\0\w\m\n\y\x\m\g\s\a\d\w\a\d\m\w\w\n\5\x\z\g\p\y\8\3\f\e\t\s\i\h\3\f\p\b\g\t\k\6\p\8\i\m\9\s\0\n\8\j\l\d\u\o\z\x\4\p\f\k\c\5\y\b\3\g\8\e\4\x\c\w\m\t\m\q\7\l\x\w\i\i\6\m\j\2\4\2\v\w\p\i\2\y\m\s\p\n\u\t\q\d\2\c\d\9\y\3\b\1\6\6\7\y\p\7\0\x\w\l\z\z\m\w\h\1\w\n\q\m\0\d\1\x\c\7\d\k\i\o\c\v\8\k\1\5\n\w\9\t\q\y\6\v\4\2\v\k\t\y\p\v\c\k\1\h\u\w\f\3\p\e\q\u\n\c\f\9\9\5\a\5\t\o\p\w\l\b\8\4\m\w\g\o\9\v\0\0\j\z\v\s\t\v\d\r\4\h\8\0\c\l\q\0\d\a\0\7\j\v\k\q\n\e\q\g\i\1\v\8\q\h\q\b\i\6\r\s\2\4\2\m\h\5\q\g\r\p\j\c\1\3\0\8\q\u\q\e\p\5\j\4\y\v\b\9\d\7\n\4\m\l\l\r\4\w\v\l\4\v\f\o\e\k\0\8\h\w\l\p\9\b\p\x\c\l\9\5\p\n\3\q\x\x\m\0\1\3\3\8\k\z\y\l\i\y\t\x\c\p\p\a\t\r\1\1\o\1\9\q\5\5\5\x\j\6\9\9\q\l\f\o\b\9\o\s\l\x\l\7\n\t\9\3\4\w\i\z\a\0\5\n\j\5\n\m\b\e\5\d\t\p\x\u\4\p\0\o\f\l\i\e\v\3\d\e\3\h\2\6\6\7\o\y\7\w\k\c\d\w\c\e\t\l\3\j\9\7\c\w\i\9\p\5\l\5\1\c\d\c\9\o\h\k\q\y\i\4\y\n\g\u\w\i\k\l\w\h\p\d\7\r\0\l\l\b\9\4\v\z\c\b\z\5\5\5\3\x\q\f\s\8\t\2\2\6\7\x\s\9\x\k\3\j\x\s\6\1\x\i\a\e\v\4\p\r\5\7\v\b\r\m\v\6\v\o\s\k\8\e\c\v\i\b\4\5\b\r\4\o\y\r\7\8\3\f\8\w\u\p\o\9\e\9\4\v\0\b\m\f\v\p\z\j\i\8\3\s\3\2\k\3\z\8\h\p\a\r\b\e\p\4\z\x\b\t\c\u\m\c\a\w\t\v\f\h\7\t\t\9\r\x\z\e\o\c\i\w\7\z\n\5\h\2\n\a\h\q\e\2\w\p\c\i\p\1\m\k\o\c\7\z\x\8\g\1\4\m\4\v\y\5\3\3\z\f\s\y\s\i\i\p\y\l\q\3\5\5\j\h\d\v\y\1\p\6\q\w\l\3\d\3\z\x\m\5\r\2\3\m\0\h\7\d\g\4\c\v\6\u\2\z\t\8\f\4\p\o\7\j\1\q\t\a\9\z\x\7\c\2\q\o\v\x\4\1\q\5\u\0\7\r\y\w\2\w\r\d\4\n\9\g\1\m\d\0\3\4\0\c\q\c\f\r\m\5\z\r\m\j\o\o\0\t\v\1\i\e\m\2\4\x\7\o\d\r\d\t\t\n\q\d\4\i\7\s\a\b\r\4\7\k\y\h\4\c\x\e\h\d\0\8\d\2\i\q\q\c\u\r\7\a\8\c\i\y\5\n\4\l\c\b\7\5\8\4\9\u\3\0\p\h\f\q\b\i\v\k\m\3\8\m\k\t\q\l\f\k\i\p\9\y\7\t\g\1\l\4\c\w\d\w\x\g\1\3\t\w\q\a\i\v\r\u\s\w\c\r\u\c\l\u\u\8\g\6\v\0\x\w\q\k\z\7\2\j\7\t\y\u\w\b\a\e\l\w\c\i\r\x\o\f\x\r\y\r\b\f\7\s\5\8\e\1\x\0\a\j\2\l\7\m\y\q\q\q\j\w\d\5\l\g\6\9\w\6\c\q\6\g\u\h\z\3\1\o\1\t\4\o\k\r\o\v\2\k\s\0\c\r\h\r\q\h\3\8\u\8\3\c\7\9\i\9\5\j\u\z\3\y\9\d\e\c\t\c\n\y\6\f\e\b\s\z\h\7\6\k\m\w\7\e\k\7\h\h\u\x\v\2\3\6\6\g\i\v\g\3\h\w\l\u\m\3\u\q\c\o\g\x\n\s\v\n\y\l\p\2\r\7\g\0\z\q\v\8\9\m\a\e\1\a\f\a\o\m\k\3\0\2\d\c\7\o\1\l\8\m\m\n\r\q\g\f\x\h\2\n\x\e\p\v\i\x\d\a\7\w\0\h\z\x\x\s\1\p\j\o\2\d\b\p\0\k\6\u\g\0\s\u\7\k\2\z\o\1\h\2\3\c\f\g\i\n\3\g\o\x\e\3\h\1\5\s\9\x\g\o\o\9\a\y\b\o\b\o\5\l\j\0\l\d\1\r\o\3\l\f\k\y\8\7\i\d\d\l\j\f\n\4\l\u\q\1\6\j\m\f\i\f\r\6\4\k\l\z\w\f\l\1\b\u\m\i\v\k\5\b\h\o\g\7\4\3\u\b\k\i\z\7\9\6\z\w\a\0\i\6\6\w\6\2\e\v\c\i\w\s\q\f\j\p\v\0\l\q\q\x\4\o\y\9\n\2\c\d\4\f\u\9\l\5\9\j\d\r\z\3\h\s\y\8\f\b\j\b\p\a\c\9\p\v\5\r\o\y\d\l\m\i\v\i\0\2\r\7\9\r\m\d\e\m\7\u\m\7\j\1\5\h\1\q\r\z\1\6\c\x\a\0\b\8\f\e\i\f\e\6\q\3\1\i\1\s\8\a\z\4\1\d\0\1\y\n\2\j\u\s\k\1\j\6\u\9\9\k\r\g\5\6\h\l\w\7\v\j\6\z\s\8\p\7\6\m\y\8\1\e\u\b\w\b\g\i\c\a\3\3\h\d\m\4\1\o\e\7\4\o\g\j\k\p\1\z\m\6\d\p\k\g\j\x\x\d\l\r\u\5\1\y\c\3\n\f\g\f\m\r\4\2\y\5\5\r\9\x\x\h\z\l\z\8\k\g\6\u\8\z\0\2\6\i\b\x\v\8\y\f\h\2\w\3\v\m\e\b\t\o\c\z\1\a\m\6\a\m\0\u\b\8\n\r\q\c\l\6\t\g\4\6\j\u\0\x\i\c\a\l\f\e\7\s\f\s\7\q\5\4\j\m\w\f\n\x\r\8\b\1\f\m\u\f\5\x\9\4\5\7\a\8\g\y\n\e\v\m\d\a\c\f\w\d\q\z\r\4\h\y\q\2\6\x\p\1\b\e\7\9\h\e\h\5\7\l\x\b\n\4\y\x\a\b\y\q\l\b\7\r\2\7\w\v\r\w\v\b\4\x\9\s\2\5\q\3\0\n\y\1\1\9\w\a\z\s\q\8\1\j\y\t\b\8\x\b\m\h\x\9\0\m\t\s\e\a\g\o\e\0\2\x\2\r\2\t\n\q\g\f\c\3\x\r\f\8\0\7\1\5\b\2\n\c\l\k\q\a\3\f\r\g\7\o\6\l\k\o\2\3\4\k\7\2\9\t\k\u\0\5\j\y\x\o\3\6\i\j\b\q\t\r\x\t\9\i\b\o\8\f\k\y\l\m\g\p\u\m\d\r\1\d\p\7\9\t\h\o\j\k\4\h\f\c\z\3\6\9\7\p\k\2\6\m\f\i\n\2\v\1\2\6\y\9\d\g\m\k\v\1\k\b\x\t\8\p\3\l\g\q\w\d\y\o\8\m\d\z\j\2\9\5\6\v\h\4\7\l\h\3\g\f\e\n\y\p\p\x\9\3\2\9\v\z\q\2\e\1\9\r\2\9\e\f\u\e\i\8\l\4\u\s\s\z\a\a\y\c\f\
d\t\m\9\9\j\b\n\9\4\t\0\q\0\q\2\b\f\1\x\1\u\r\n\c\b\h\f\t\7\g\n\v\5\m\l\1\o\c\v\3\t\3\j\y\7\0\t\a\1\1\c\1\h\3\2\w\s\s\c\a\5\i\e\q\b\m\4\y\w\0\j\m\x\6\t\l\1\d\s\h\j\k\1\a\a\1\i\k\p\1\w\s\t\z\n\5\o\v\8\v\f\4\h\g\a\9\d\h\1\x\h\k\s\5\r\5\c\9\u\a\f\b\f\6\j\a\4\l\9\3\c\6\h\s\t\g\h\0\a\l\x\1\7\u\g\u\a\i\1\w\r\r\i\o\e\j\o\8\7\a\m\9\8\5\3\p\9\1\3\4\4\8\i\n\m\c\x\w\l\q\9\n\s\a\m\m\n\f\a\j\d\a\s\t\1\1\r\x\r\p\v\w\n\q\5\u\p\e\3\i\u\m\1\5\9\z\a\5\1\v\y\d\j\n\v\g\a\g\v\8\5\i\d\4\1\u\4\5\m\l\0\b\x\f\y\k\c\l\r\u\s\r\s\5\7\6\y\q\h\k\d\n\r\z\e\u\z\w\e\z\a\8\v\d\x\h\k\0\n\w\v\3\j\w\d\6\l\7\1\1\g\a\r\z\e\h\h\s\a\2\e\n\y\r\f\j\g\p\r\b\d\3\v\w\1\v\4\l\g\a\y\e\m\g\k\4\9\n\b\c\i\4\0\6\5\b\2\x\v\b\i\n\8\f\7\g\2\p\z\x\6\j\k\9\l\e\l\9\h\b\y\2\i\0\h\j\q\k\o\9\y\0\w\3\b\x\y\u\s\m\7\q\i\c\b\r\y\k\7\t\7\8\6\k\w\9\5\d\j\x\p\1\g\o\u\t\f\f\q\v\u\r\j\j\l\m\s\c\y\n\t\j\8\6\r\y\8\0\k\f\2\y\q\o\g\o\r\o\1\9\n\l\x\j\t\4\g\h\f\7\r\6\p\e\q\3\q\v\q\w\5\i\r\p\w\s\p\3\b\m\d\i\k\n\x\u\5\g\i\7\g\u\h\b\p\h\6\y\o\d\f\1\4\7\s\c\3\e\6\n\z\f\h\8\8\g\l\c\a\d\h\y\b\y\j\p\s\8\y\b\l\l\3\a\j\0\8\s\h\b\u\j\8\k\i\5\s\t\8\r\r\m\p\3\e\y\p\8\9\3\6\q\8\f\1\g\o\9\p\l\z\z\7\q\4\z\m\n\1\2\q\r\t\8\t\x\t\g\x\g\w\8\0\q\s\6\w\n\3\n\m\g\x\8\a\3\w\j\u\n\2\a\8\n\0\v\1\i\a\f\a\r\3\m\1\3\q\y\2\j\h\g\j\k\n\w\d\j\r\e\z\d\x\g\h\u\x\z\q\j\q\x\j\6\o\1\5\3\a\w\a\e\s\r\x\m\p\a\b\r\t\e\7\3\b\k\a\p\p\b\h\z\d\2\i\p\6\q\y\x\v\j\u\8\c\a\i\5\v\q\6\o\z\u\b\s\b\n\q\l\o\f\k\8\x\n\y\2\k\4\9\n\9\q\n\s\w\n\w\q\i\a\f\v\5\t\1\5\b\0\4\t\c\x\s\6\4\j\o\g\c\s\f\k\n\x\z\k\2\t\d\n\r\f\7\3\s\e\v\f\r\g\m\2\x\u\d\6\7\n\z\1\j\9\m\x\0\o\o\z\6\r\q\d\v\p\q\2\y\l\0\7\9\s\8\o\q\y\j\m\b\0\v\5\2\n\0\4\3\v\p\r\9\d\q\g\9\w\9\n\c\x\2\0\7\b\w\8\e\o\t\v\d\l\x\d\x\s\x\a\d\q\o\b\v\k\0\9\m\k\7\9\z\n\z\g\7\z\o\b\l\e\5\r\p\r\l\m\x\t\3\v\p\3\0\p\c\s\p\v\9\m\o\w\4\7\4\9\3\7\d\h\x\h\p\y\c\t\3\2\r\a\d\z\z\7\b\n\b\x\m\g\i\t\e\e\1\8\e\c\p\4\r\6\e\y\d\9\1\v\0\k\8\9\7\f\v\q\0\b\z\j\g\9\y\f\5\k\6\7\z\l\d\e\c\3\6\p\q\y\q\7\o\b\3\x\w\g\i\u\h\c\r\l\v\a\m\x\u\i\i\9\c\o\4\o\6\k\2\h\g\n\l\z\4\l\l\k\p\5\7\q\w\8\y\y\4\j\w\k\g\9\7\5\3\5\s\t\x\p\q\s\j\l\p\m\6\j\b\j\2\9\a\d\e\v\9\7\d\r\c\f\d\o\0\w\u\b\w\9\9\v\b\2\2\k\3\y\l\c\2\l\t\g\t\d\w\r\b\k\6\2\o\m\q\5\4\m\e\2\c\7\m\l\a\z\o\9\r\k\e\r\k\b\b\k\p\c\8\0\l\q\n\0\o\h\9\h\w\0\e\i\1\s\x\t\y\x\f\v\q\s\m\d\0\y\h\9\8\t\b\8\d\y\2\1\6\p\p\8\h\e\r\j\q\i\f\s\i\t\p\2\r\b\r\e\9\5\i\3\b\o\9\e\n\p\z\m\j\t\g\3\1\r\a\s\q\3\h\7\h\v\1\p\p\z\w\y\1\y\e\5\5\7\c\y\w\t\0\9\f\5\e\3\t\q\l\h\x\x\o\c\w\0\s\v\v\5\r\d\g\n\1\u\c\c\k\1\u\z\h\u\r\9\a\8\d\x\c\1\j\f\x\3\u\8\7\a\e\q\8\1\z\7\q\8\b\n\w\a\8\o\1\m\w\7\5\1\8\3\z\1\v\e\a\m\j\m\k\4\w\g\y\r\4\n\4\2\o\b\2\u\o\x\6\2\r\c\6\k\5\i\x\s\p\t\8\6\8\9\0\a\5\6\7\6\r\m\j\m\c\1\z\j\1\k\1\5\h\o\d\o\p\7\q\e\a\o\o\g\m\1\1\a\b\l\z\o\9\o\s\m\z\y\f\4\0\5\7\z\1\i\8\f\6\q\8\w\p\u\v\v\4\x\7\f\u\o\3\j\v\2\g\h\t\w\a\q\e\z\5\1\x\l\c\r\1\5 ]] 00:27:16.253 00:27:16.253 real 0m3.371s 00:27:16.253 user 0m2.811s 00:27:16.253 sys 0m0.453s 00:27:16.253 13:12:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.253 13:12:34 -- common/autotest_common.sh@10 -- # set +x 00:27:16.253 ************************************ 00:27:16.253 END TEST dd_rw_offset 00:27:16.253 ************************************ 00:27:16.253 13:12:35 -- dd/basic_rw.sh@1 -- # cleanup 00:27:16.253 13:12:35 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:27:16.253 13:12:35 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:16.253 13:12:35 -- dd/common.sh@11 -- # local nvme_ref= 00:27:16.253 13:12:35 -- dd/common.sh@12 -- # local size=0xffff 00:27:16.253 13:12:35 -- dd/common.sh@14 -- # local bs=1048576 
00:27:16.253 13:12:35 -- dd/common.sh@15 -- # local count=1 00:27:16.253 13:12:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:16.253 13:12:35 -- dd/common.sh@18 -- # gen_conf 00:27:16.253 13:12:35 -- dd/common.sh@31 -- # xtrace_disable 00:27:16.253 13:12:35 -- common/autotest_common.sh@10 -- # set +x 00:27:16.253 [2024-06-11 13:12:35.070400] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:16.253 [2024-06-11 13:12:35.070713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138103 ] 00:27:16.253 { 00:27:16.253 "subsystems": [ 00:27:16.253 { 00:27:16.253 "subsystem": "bdev", 00:27:16.253 "config": [ 00:27:16.253 { 00:27:16.253 "params": { 00:27:16.253 "trtype": "pcie", 00:27:16.253 "traddr": "0000:00:06.0", 00:27:16.253 "name": "Nvme0" 00:27:16.253 }, 00:27:16.253 "method": "bdev_nvme_attach_controller" 00:27:16.253 }, 00:27:16.253 { 00:27:16.253 "method": "bdev_wait_for_examine" 00:27:16.253 } 00:27:16.253 ] 00:27:16.253 } 00:27:16.253 ] 00:27:16.253 } 00:27:16.512 [2024-06-11 13:12:35.222657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.771 [2024-06-11 13:12:35.397548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.962  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:17.962 00:27:17.962 13:12:36 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:17.962 ************************************ 00:27:17.962 END TEST spdk_dd_basic_rw 00:27:17.962 ************************************ 00:27:17.962 00:27:17.962 real 0m39.977s 00:27:17.962 user 0m33.007s 00:27:17.962 sys 0m5.430s 00:27:17.962 13:12:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.962 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.962 13:12:36 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:17.962 13:12:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:17.962 13:12:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:17.962 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.962 ************************************ 00:27:17.962 START TEST spdk_dd_posix 00:27:17.962 ************************************ 00:27:17.962 13:12:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:17.962 * Looking for test storage... 
00:27:17.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:17.962 13:12:36 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:17.962 13:12:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.962 13:12:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.962 13:12:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.962 13:12:36 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:17.962 13:12:36 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:17.962 13:12:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:17.962 13:12:36 -- paths/export.sh@5 -- # export PATH 00:27:17.962 13:12:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:17.962 13:12:36 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:27:17.962 13:12:36 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:27:17.962 13:12:36 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:27:17.962 13:12:36 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:27:17.962 13:12:36 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:17.962 13:12:36 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:17.962 13:12:36 -- dd/posix.sh@130 -- # tests 00:27:17.962 13:12:36 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:27:17.962 * First test run, using AIO 00:27:17.962 13:12:36 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:27:17.962 13:12:36 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:17.962 13:12:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:17.962 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.962 ************************************ 00:27:17.962 START TEST dd_flag_append 00:27:17.962 ************************************ 00:27:17.962 13:12:36 -- common/autotest_common.sh@1104 -- # append 00:27:17.962 13:12:36 -- dd/posix.sh@16 -- # local dump0 00:27:17.962 13:12:36 -- dd/posix.sh@17 -- # local dump1 00:27:17.962 13:12:36 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:17.962 13:12:36 -- dd/common.sh@98 -- # xtrace_disable 00:27:17.962 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.962 13:12:36 -- dd/posix.sh@19 -- # dump0=u7ax85s5op1j74y3bx8lh25iyef6x2v3 00:27:17.962 13:12:36 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:17.962 13:12:36 -- dd/common.sh@98 -- # xtrace_disable 00:27:17.962 13:12:36 -- common/autotest_common.sh@10 -- # set +x 00:27:17.962 13:12:36 -- dd/posix.sh@20 -- # dump1=vokbwfeaex7mrmdajc7afqqlep07bu5b 00:27:17.962 13:12:36 -- dd/posix.sh@22 -- # printf %s u7ax85s5op1j74y3bx8lh25iyef6x2v3 00:27:17.962 13:12:36 -- dd/posix.sh@23 -- # printf %s vokbwfeaex7mrmdajc7afqqlep07bu5b 00:27:17.962 13:12:36 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:18.220 [2024-06-11 13:12:36.835060] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:18.221 [2024-06-11 13:12:36.835636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138206 ] 00:27:18.221 [2024-06-11 13:12:37.002356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.479 [2024-06-11 13:12:37.163332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.673  Copying: 32/32 [B] (average 31 kBps) 00:27:19.673 00:27:19.673 13:12:38 -- dd/posix.sh@27 -- # [[ vokbwfeaex7mrmdajc7afqqlep07bu5bu7ax85s5op1j74y3bx8lh25iyef6x2v3 == \v\o\k\b\w\f\e\a\e\x\7\m\r\m\d\a\j\c\7\a\f\q\q\l\e\p\0\7\b\u\5\b\u\7\a\x\8\5\s\5\o\p\1\j\7\4\y\3\b\x\8\l\h\2\5\i\y\e\f\6\x\2\v\3 ]] 00:27:19.673 00:27:19.673 real 0m1.611s 00:27:19.673 user 0m1.259s 00:27:19.673 sys 0m0.213s 00:27:19.673 13:12:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.673 13:12:38 -- common/autotest_common.sh@10 -- # set +x 00:27:19.673 ************************************ 00:27:19.673 END TEST dd_flag_append 00:27:19.673 ************************************ 00:27:19.673 13:12:38 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:27:19.673 13:12:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:19.673 13:12:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:19.673 13:12:38 -- common/autotest_common.sh@10 -- # set +x 00:27:19.673 ************************************ 00:27:19.673 START TEST dd_flag_directory 00:27:19.673 ************************************ 00:27:19.673 13:12:38 -- common/autotest_common.sh@1104 -- # directory 00:27:19.673 13:12:38 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:19.673 13:12:38 -- common/autotest_common.sh@640 -- # local es=0 
00:27:19.673 13:12:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:19.673 13:12:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:19.673 13:12:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:19.673 13:12:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:19.673 13:12:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:19.673 13:12:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:19.673 13:12:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:19.673 13:12:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:19.673 13:12:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:19.673 13:12:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:19.673 [2024-06-11 13:12:38.501028] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:19.673 [2024-06-11 13:12:38.501370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138246 ] 00:27:19.931 [2024-06-11 13:12:38.667019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.188 [2024-06-11 13:12:38.839517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.446 [2024-06-11 13:12:39.105227] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:20.446 [2024-06-11 13:12:39.105457] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:20.446 [2024-06-11 13:12:39.105541] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:21.012 [2024-06-11 13:12:39.690844] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:21.270 13:12:40 -- common/autotest_common.sh@643 -- # es=236 00:27:21.270 13:12:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:21.270 13:12:40 -- common/autotest_common.sh@652 -- # es=108 00:27:21.270 13:12:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:21.270 13:12:40 -- common/autotest_common.sh@660 -- # es=1 00:27:21.270 13:12:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:21.270 13:12:40 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:21.270 13:12:40 -- common/autotest_common.sh@640 -- # local es=0 00:27:21.270 13:12:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:21.270 13:12:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:21.270 13:12:40 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:21.270 13:12:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:21.270 13:12:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:21.270 13:12:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:21.270 13:12:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:21.270 13:12:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:21.270 13:12:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:21.270 13:12:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:21.270 [2024-06-11 13:12:40.084506] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:21.270 [2024-06-11 13:12:40.085087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138274 ] 00:27:21.529 [2024-06-11 13:12:40.250848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.787 [2024-06-11 13:12:40.440908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.045 [2024-06-11 13:12:40.698527] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:22.045 [2024-06-11 13:12:40.698833] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:22.045 [2024-06-11 13:12:40.698891] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:22.613 [2024-06-11 13:12:41.286656] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:22.871 ************************************ 00:27:22.871 END TEST dd_flag_directory 00:27:22.871 ************************************ 00:27:22.871 13:12:41 -- common/autotest_common.sh@643 -- # es=236 00:27:22.871 13:12:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:22.871 13:12:41 -- common/autotest_common.sh@652 -- # es=108 00:27:22.871 13:12:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:22.871 13:12:41 -- common/autotest_common.sh@660 -- # es=1 00:27:22.871 13:12:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:22.871 00:27:22.871 real 0m3.194s 00:27:22.871 user 0m2.553s 00:27:22.871 sys 0m0.436s 00:27:22.871 13:12:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.871 13:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:22.871 13:12:41 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:27:22.871 13:12:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:22.871 13:12:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:22.871 13:12:41 -- common/autotest_common.sh@10 -- # set +x 00:27:22.871 ************************************ 00:27:22.871 START TEST dd_flag_nofollow 00:27:22.871 ************************************ 00:27:22.871 13:12:41 -- common/autotest_common.sh@1104 -- # nofollow 00:27:22.871 13:12:41 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:22.871 13:12:41 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:22.871 13:12:41 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:22.871 13:12:41 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:22.871 13:12:41 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:22.871 13:12:41 -- common/autotest_common.sh@640 -- # local es=0 00:27:22.871 13:12:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:22.871 13:12:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:22.871 13:12:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:22.871 13:12:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:22.871 13:12:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:22.871 13:12:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:22.871 13:12:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:22.871 13:12:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:22.871 13:12:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:22.871 13:12:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:23.130 [2024-06-11 13:12:41.740321] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:23.130 [2024-06-11 13:12:41.740599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138318 ] 00:27:23.130 [2024-06-11 13:12:41.895670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.389 [2024-06-11 13:12:42.065680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.652 [2024-06-11 13:12:42.330004] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:23.652 [2024-06-11 13:12:42.330295] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:23.652 [2024-06-11 13:12:42.330352] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:24.230 [2024-06-11 13:12:42.956696] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:24.488 13:12:43 -- common/autotest_common.sh@643 -- # es=216 00:27:24.488 13:12:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:24.488 13:12:43 -- common/autotest_common.sh@652 -- # es=88 00:27:24.488 13:12:43 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:24.488 13:12:43 -- common/autotest_common.sh@660 -- # es=1 00:27:24.488 13:12:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:24.488 13:12:43 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:24.488 13:12:43 -- common/autotest_common.sh@640 -- # local es=0 00:27:24.488 13:12:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:24.488 13:12:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:24.488 13:12:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:24.488 13:12:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:24.488 13:12:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:24.488 13:12:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:24.488 13:12:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:24.488 13:12:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:24.488 13:12:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:24.488 13:12:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:24.747 [2024-06-11 13:12:43.356401] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:24.747 [2024-06-11 13:12:43.357515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138345 ] 00:27:24.747 [2024-06-11 13:12:43.522646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.005 [2024-06-11 13:12:43.699069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.264 [2024-06-11 13:12:43.952639] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:25.264 [2024-06-11 13:12:43.952865] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:25.264 [2024-06-11 13:12:43.952925] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:25.831 [2024-06-11 13:12:44.551155] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:26.090 13:12:44 -- common/autotest_common.sh@643 -- # es=216 00:27:26.090 13:12:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:26.090 13:12:44 -- common/autotest_common.sh@652 -- # es=88 00:27:26.090 13:12:44 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:26.090 13:12:44 -- common/autotest_common.sh@660 -- # es=1 00:27:26.090 13:12:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:26.090 13:12:44 -- dd/posix.sh@46 -- # gen_bytes 512 00:27:26.090 13:12:44 -- dd/common.sh@98 -- # xtrace_disable 00:27:26.090 13:12:44 -- common/autotest_common.sh@10 -- # set +x 00:27:26.090 13:12:44 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:26.349 [2024-06-11 13:12:44.958200] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:26.349 [2024-06-11 13:12:44.958565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138360 ] 00:27:26.349 [2024-06-11 13:12:45.124501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.608 [2024-06-11 13:12:45.301966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.802  Copying: 512/512 [B] (average 500 kBps) 00:27:27.802 00:27:27.802 ************************************ 00:27:27.802 END TEST dd_flag_nofollow 00:27:27.803 ************************************ 00:27:27.803 13:12:46 -- dd/posix.sh@49 -- # [[ t3xepz8b2k4utr9ycof893cq34d0bsuxjp136bmajol4n4s0pyt8kqvf7nwgjxklpe0l2evvmgyhvp3elxkocuobhy7k8tehhalnh8iycesevgrxiiwen3t6ahyzs3bnah9z5iutpngnbr3m4j5es22cqejgu0mylutuppj335ypsnss6wrcfy8uj6j58hdxy624lv6rqmpy01dj55rtxihpjh711aqmdl68p579nxdde4uwxmlvrwvlrajsmrs71eghaf8y59kv62gj475dsqcb6zaciovuuts24b3kr3z3l3q2eq8szqrf5c0y7ctolrzutzdvjbcrtzdfz07nrx5b8h1riw877eoxdtmgp2nasu372yc46ww8qbc2tph1r1jmjpfo3yk9rgb9scddzlbeinwh5j6rhl4oxtoc0358iigt4cbki7dzwoyvh52my42chcm3msi0s9kxwest7fgvu1qrs959b0z0lbul0c57mr2siz13p2mxbfzygi7i == \t\3\x\e\p\z\8\b\2\k\4\u\t\r\9\y\c\o\f\8\9\3\c\q\3\4\d\0\b\s\u\x\j\p\1\3\6\b\m\a\j\o\l\4\n\4\s\0\p\y\t\8\k\q\v\f\7\n\w\g\j\x\k\l\p\e\0\l\2\e\v\v\m\g\y\h\v\p\3\e\l\x\k\o\c\u\o\b\h\y\7\k\8\t\e\h\h\a\l\n\h\8\i\y\c\e\s\e\v\g\r\x\i\i\w\e\n\3\t\6\a\h\y\z\s\3\b\n\a\h\9\z\5\i\u\t\p\n\g\n\b\r\3\m\4\j\5\e\s\2\2\c\q\e\j\g\u\0\m\y\l\u\t\u\p\p\j\3\3\5\y\p\s\n\s\s\6\w\r\c\f\y\8\u\j\6\j\5\8\h\d\x\y\6\2\4\l\v\6\r\q\m\p\y\0\1\d\j\5\5\r\t\x\i\h\p\j\h\7\1\1\a\q\m\d\l\6\8\p\5\7\9\n\x\d\d\e\4\u\w\x\m\l\v\r\w\v\l\r\a\j\s\m\r\s\7\1\e\g\h\a\f\8\y\5\9\k\v\6\2\g\j\4\7\5\d\s\q\c\b\6\z\a\c\i\o\v\u\u\t\s\2\4\b\3\k\r\3\z\3\l\3\q\2\e\q\8\s\z\q\r\f\5\c\0\y\7\c\t\o\l\r\z\u\t\z\d\v\j\b\c\r\t\z\d\f\z\0\7\n\r\x\5\b\8\h\1\r\i\w\8\7\7\e\o\x\d\t\m\g\p\2\n\a\s\u\3\7\2\y\c\4\6\w\w\8\q\b\c\2\t\p\h\1\r\1\j\m\j\p\f\o\3\y\k\9\r\g\b\9\s\c\d\d\z\l\b\e\i\n\w\h\5\j\6\r\h\l\4\o\x\t\o\c\0\3\5\8\i\i\g\t\4\c\b\k\i\7\d\z\w\o\y\v\h\5\2\m\y\4\2\c\h\c\m\3\m\s\i\0\s\9\k\x\w\e\s\t\7\f\g\v\u\1\q\r\s\9\5\9\b\0\z\0\l\b\u\l\0\c\5\7\m\r\2\s\i\z\1\3\p\2\m\x\b\f\z\y\g\i\7\i ]] 00:27:27.803 00:27:27.803 real 0m4.866s 00:27:27.803 user 0m3.857s 00:27:27.803 sys 0m0.663s 00:27:27.803 13:12:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.803 13:12:46 -- common/autotest_common.sh@10 -- # set +x 00:27:27.803 13:12:46 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:27:27.803 13:12:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:27.803 13:12:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:27.803 13:12:46 -- common/autotest_common.sh@10 -- # set +x 00:27:27.803 ************************************ 00:27:27.803 START TEST dd_flag_noatime 00:27:27.803 ************************************ 00:27:27.803 13:12:46 -- common/autotest_common.sh@1104 -- # noatime 00:27:27.803 13:12:46 -- dd/posix.sh@53 -- # local atime_if 00:27:27.803 13:12:46 -- dd/posix.sh@54 -- # local atime_of 00:27:27.803 13:12:46 -- dd/posix.sh@58 -- # gen_bytes 512 00:27:27.803 13:12:46 -- dd/common.sh@98 -- # xtrace_disable 00:27:27.803 13:12:46 -- common/autotest_common.sh@10 -- # set +x 00:27:27.803 13:12:46 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:27.803 13:12:46 -- dd/posix.sh@60 -- # atime_if=1718111565 00:27:27.803 13:12:46 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:27.803 13:12:46 -- dd/posix.sh@61 -- # atime_of=1718111566 00:27:27.803 13:12:46 -- dd/posix.sh@66 -- # sleep 1 00:27:29.178 13:12:47 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:29.178 [2024-06-11 13:12:47.684039] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:29.178 [2024-06-11 13:12:47.684430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138443 ] 00:27:29.178 [2024-06-11 13:12:47.852013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.437 [2024-06-11 13:12:48.070435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.631  Copying: 512/512 [B] (average 500 kBps) 00:27:30.631 00:27:30.631 13:12:49 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:30.632 13:12:49 -- dd/posix.sh@69 -- # (( atime_if == 1718111565 )) 00:27:30.632 13:12:49 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:30.632 13:12:49 -- dd/posix.sh@70 -- # (( atime_of == 1718111566 )) 00:27:30.632 13:12:49 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:30.632 [2024-06-11 13:12:49.373711] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:30.632 [2024-06-11 13:12:49.374150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138474 ] 00:27:30.890 [2024-06-11 13:12:49.539958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.890 [2024-06-11 13:12:49.714306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.083  Copying: 512/512 [B] (average 500 kBps) 00:27:32.083 00:27:32.342 13:12:50 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:32.342 ************************************ 00:27:32.342 END TEST dd_flag_noatime 00:27:32.342 ************************************ 00:27:32.342 13:12:50 -- dd/posix.sh@73 -- # (( atime_if < 1718111569 )) 00:27:32.342 00:27:32.342 real 0m4.346s 00:27:32.342 user 0m2.553s 00:27:32.342 sys 0m0.528s 00:27:32.342 13:12:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.342 13:12:50 -- common/autotest_common.sh@10 -- # set +x 00:27:32.342 13:12:50 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:27:32.342 13:12:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:32.342 13:12:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:32.342 13:12:50 -- common/autotest_common.sh@10 -- # set +x 00:27:32.342 ************************************ 00:27:32.342 START TEST dd_flags_misc 00:27:32.342 ************************************ 00:27:32.342 13:12:50 -- common/autotest_common.sh@1104 -- # io 00:27:32.342 13:12:51 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:27:32.342 13:12:51 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
00:27:32.342 13:12:51 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:27:32.342 13:12:51 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:32.342 13:12:51 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:32.342 13:12:51 -- dd/common.sh@98 -- # xtrace_disable 00:27:32.342 13:12:51 -- common/autotest_common.sh@10 -- # set +x 00:27:32.342 13:12:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:32.342 13:12:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:32.342 [2024-06-11 13:12:51.071999] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:32.342 [2024-06-11 13:12:51.072358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138510 ] 00:27:32.601 [2024-06-11 13:12:51.239014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.601 [2024-06-11 13:12:51.402471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.793  Copying: 512/512 [B] (average 500 kBps) 00:27:33.793 00:27:34.052 13:12:52 -- dd/posix.sh@93 -- # [[ xe2mz7t1667p63jl8y84686f3sjsad601hhaflpq87yjbkcfngow8oji59h4hvu1pzkdlvgrt7wv7vv04g17s9nhiltbq4f5vexemjnhjzuv49a38wnnifksfvixbjo7q5gwubj085y2yp0ompm9duc9a9201vycd0ckbw5eu8vp4abvu5yghz0tpr663zvgiy4aa195cikvw6lomd1nz77xmkrom3fnp20tnjjlqznvjcgxsrt8sgwwaiamvuu5lgh0wv4ib0q9k8jf58wc0ny53gpdoo9xi5pq4z2ouhvojxitvm8t8y7lpuke5pnoesxsoskh4di0p0wie0mjj6bz9v13u79zft6fphxm3h12txx2yp9jrom5rfv150lwb0vu37vzk94iq90o8nrleogjshl0eroj9nyzdkq20slc4ml00hz1mf12ex9012uc7xcm48widhjutpiun2ssvggnaxbqs3a49acegvrhuzkiawc8puwp5y8a0u6am8yq == \x\e\2\m\z\7\t\1\6\6\7\p\6\3\j\l\8\y\8\4\6\8\6\f\3\s\j\s\a\d\6\0\1\h\h\a\f\l\p\q\8\7\y\j\b\k\c\f\n\g\o\w\8\o\j\i\5\9\h\4\h\v\u\1\p\z\k\d\l\v\g\r\t\7\w\v\7\v\v\0\4\g\1\7\s\9\n\h\i\l\t\b\q\4\f\5\v\e\x\e\m\j\n\h\j\z\u\v\4\9\a\3\8\w\n\n\i\f\k\s\f\v\i\x\b\j\o\7\q\5\g\w\u\b\j\0\8\5\y\2\y\p\0\o\m\p\m\9\d\u\c\9\a\9\2\0\1\v\y\c\d\0\c\k\b\w\5\e\u\8\v\p\4\a\b\v\u\5\y\g\h\z\0\t\p\r\6\6\3\z\v\g\i\y\4\a\a\1\9\5\c\i\k\v\w\6\l\o\m\d\1\n\z\7\7\x\m\k\r\o\m\3\f\n\p\2\0\t\n\j\j\l\q\z\n\v\j\c\g\x\s\r\t\8\s\g\w\w\a\i\a\m\v\u\u\5\l\g\h\0\w\v\4\i\b\0\q\9\k\8\j\f\5\8\w\c\0\n\y\5\3\g\p\d\o\o\9\x\i\5\p\q\4\z\2\o\u\h\v\o\j\x\i\t\v\m\8\t\8\y\7\l\p\u\k\e\5\p\n\o\e\s\x\s\o\s\k\h\4\d\i\0\p\0\w\i\e\0\m\j\j\6\b\z\9\v\1\3\u\7\9\z\f\t\6\f\p\h\x\m\3\h\1\2\t\x\x\2\y\p\9\j\r\o\m\5\r\f\v\1\5\0\l\w\b\0\v\u\3\7\v\z\k\9\4\i\q\9\0\o\8\n\r\l\e\o\g\j\s\h\l\0\e\r\o\j\9\n\y\z\d\k\q\2\0\s\l\c\4\m\l\0\0\h\z\1\m\f\1\2\e\x\9\0\1\2\u\c\7\x\c\m\4\8\w\i\d\h\j\u\t\p\i\u\n\2\s\s\v\g\g\n\a\x\b\q\s\3\a\4\9\a\c\e\g\v\r\h\u\z\k\i\a\w\c\8\p\u\w\p\5\y\8\a\0\u\6\a\m\8\y\q ]] 00:27:34.052 13:12:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:34.052 13:12:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:34.052 [2024-06-11 13:12:52.712474] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:34.052 [2024-06-11 13:12:52.713480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138538 ] 00:27:34.052 [2024-06-11 13:12:52.882522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.310 [2024-06-11 13:12:53.054959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.508  Copying: 512/512 [B] (average 500 kBps) 00:27:35.508 00:27:35.508 13:12:54 -- dd/posix.sh@93 -- # [[ xe2mz7t1667p63jl8y84686f3sjsad601hhaflpq87yjbkcfngow8oji59h4hvu1pzkdlvgrt7wv7vv04g17s9nhiltbq4f5vexemjnhjzuv49a38wnnifksfvixbjo7q5gwubj085y2yp0ompm9duc9a9201vycd0ckbw5eu8vp4abvu5yghz0tpr663zvgiy4aa195cikvw6lomd1nz77xmkrom3fnp20tnjjlqznvjcgxsrt8sgwwaiamvuu5lgh0wv4ib0q9k8jf58wc0ny53gpdoo9xi5pq4z2ouhvojxitvm8t8y7lpuke5pnoesxsoskh4di0p0wie0mjj6bz9v13u79zft6fphxm3h12txx2yp9jrom5rfv150lwb0vu37vzk94iq90o8nrleogjshl0eroj9nyzdkq20slc4ml00hz1mf12ex9012uc7xcm48widhjutpiun2ssvggnaxbqs3a49acegvrhuzkiawc8puwp5y8a0u6am8yq == \x\e\2\m\z\7\t\1\6\6\7\p\6\3\j\l\8\y\8\4\6\8\6\f\3\s\j\s\a\d\6\0\1\h\h\a\f\l\p\q\8\7\y\j\b\k\c\f\n\g\o\w\8\o\j\i\5\9\h\4\h\v\u\1\p\z\k\d\l\v\g\r\t\7\w\v\7\v\v\0\4\g\1\7\s\9\n\h\i\l\t\b\q\4\f\5\v\e\x\e\m\j\n\h\j\z\u\v\4\9\a\3\8\w\n\n\i\f\k\s\f\v\i\x\b\j\o\7\q\5\g\w\u\b\j\0\8\5\y\2\y\p\0\o\m\p\m\9\d\u\c\9\a\9\2\0\1\v\y\c\d\0\c\k\b\w\5\e\u\8\v\p\4\a\b\v\u\5\y\g\h\z\0\t\p\r\6\6\3\z\v\g\i\y\4\a\a\1\9\5\c\i\k\v\w\6\l\o\m\d\1\n\z\7\7\x\m\k\r\o\m\3\f\n\p\2\0\t\n\j\j\l\q\z\n\v\j\c\g\x\s\r\t\8\s\g\w\w\a\i\a\m\v\u\u\5\l\g\h\0\w\v\4\i\b\0\q\9\k\8\j\f\5\8\w\c\0\n\y\5\3\g\p\d\o\o\9\x\i\5\p\q\4\z\2\o\u\h\v\o\j\x\i\t\v\m\8\t\8\y\7\l\p\u\k\e\5\p\n\o\e\s\x\s\o\s\k\h\4\d\i\0\p\0\w\i\e\0\m\j\j\6\b\z\9\v\1\3\u\7\9\z\f\t\6\f\p\h\x\m\3\h\1\2\t\x\x\2\y\p\9\j\r\o\m\5\r\f\v\1\5\0\l\w\b\0\v\u\3\7\v\z\k\9\4\i\q\9\0\o\8\n\r\l\e\o\g\j\s\h\l\0\e\r\o\j\9\n\y\z\d\k\q\2\0\s\l\c\4\m\l\0\0\h\z\1\m\f\1\2\e\x\9\0\1\2\u\c\7\x\c\m\4\8\w\i\d\h\j\u\t\p\i\u\n\2\s\s\v\g\g\n\a\x\b\q\s\3\a\4\9\a\c\e\g\v\r\h\u\z\k\i\a\w\c\8\p\u\w\p\5\y\8\a\0\u\6\a\m\8\y\q ]] 00:27:35.508 13:12:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:35.508 13:12:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:35.508 [2024-06-11 13:12:54.346869] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:35.508 [2024-06-11 13:12:54.347785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138566 ] 00:27:35.769 [2024-06-11 13:12:54.514310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.027 [2024-06-11 13:12:54.673501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.219  Copying: 512/512 [B] (average 125 kBps) 00:27:37.219 00:27:37.220 13:12:55 -- dd/posix.sh@93 -- # [[ xe2mz7t1667p63jl8y84686f3sjsad601hhaflpq87yjbkcfngow8oji59h4hvu1pzkdlvgrt7wv7vv04g17s9nhiltbq4f5vexemjnhjzuv49a38wnnifksfvixbjo7q5gwubj085y2yp0ompm9duc9a9201vycd0ckbw5eu8vp4abvu5yghz0tpr663zvgiy4aa195cikvw6lomd1nz77xmkrom3fnp20tnjjlqznvjcgxsrt8sgwwaiamvuu5lgh0wv4ib0q9k8jf58wc0ny53gpdoo9xi5pq4z2ouhvojxitvm8t8y7lpuke5pnoesxsoskh4di0p0wie0mjj6bz9v13u79zft6fphxm3h12txx2yp9jrom5rfv150lwb0vu37vzk94iq90o8nrleogjshl0eroj9nyzdkq20slc4ml00hz1mf12ex9012uc7xcm48widhjutpiun2ssvggnaxbqs3a49acegvrhuzkiawc8puwp5y8a0u6am8yq == \x\e\2\m\z\7\t\1\6\6\7\p\6\3\j\l\8\y\8\4\6\8\6\f\3\s\j\s\a\d\6\0\1\h\h\a\f\l\p\q\8\7\y\j\b\k\c\f\n\g\o\w\8\o\j\i\5\9\h\4\h\v\u\1\p\z\k\d\l\v\g\r\t\7\w\v\7\v\v\0\4\g\1\7\s\9\n\h\i\l\t\b\q\4\f\5\v\e\x\e\m\j\n\h\j\z\u\v\4\9\a\3\8\w\n\n\i\f\k\s\f\v\i\x\b\j\o\7\q\5\g\w\u\b\j\0\8\5\y\2\y\p\0\o\m\p\m\9\d\u\c\9\a\9\2\0\1\v\y\c\d\0\c\k\b\w\5\e\u\8\v\p\4\a\b\v\u\5\y\g\h\z\0\t\p\r\6\6\3\z\v\g\i\y\4\a\a\1\9\5\c\i\k\v\w\6\l\o\m\d\1\n\z\7\7\x\m\k\r\o\m\3\f\n\p\2\0\t\n\j\j\l\q\z\n\v\j\c\g\x\s\r\t\8\s\g\w\w\a\i\a\m\v\u\u\5\l\g\h\0\w\v\4\i\b\0\q\9\k\8\j\f\5\8\w\c\0\n\y\5\3\g\p\d\o\o\9\x\i\5\p\q\4\z\2\o\u\h\v\o\j\x\i\t\v\m\8\t\8\y\7\l\p\u\k\e\5\p\n\o\e\s\x\s\o\s\k\h\4\d\i\0\p\0\w\i\e\0\m\j\j\6\b\z\9\v\1\3\u\7\9\z\f\t\6\f\p\h\x\m\3\h\1\2\t\x\x\2\y\p\9\j\r\o\m\5\r\f\v\1\5\0\l\w\b\0\v\u\3\7\v\z\k\9\4\i\q\9\0\o\8\n\r\l\e\o\g\j\s\h\l\0\e\r\o\j\9\n\y\z\d\k\q\2\0\s\l\c\4\m\l\0\0\h\z\1\m\f\1\2\e\x\9\0\1\2\u\c\7\x\c\m\4\8\w\i\d\h\j\u\t\p\i\u\n\2\s\s\v\g\g\n\a\x\b\q\s\3\a\4\9\a\c\e\g\v\r\h\u\z\k\i\a\w\c\8\p\u\w\p\5\y\8\a\0\u\6\a\m\8\y\q ]] 00:27:37.220 13:12:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:37.220 13:12:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:37.220 [2024-06-11 13:12:56.033269] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:37.220 [2024-06-11 13:12:56.033728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138600 ] 00:27:37.478 [2024-06-11 13:12:56.198415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.736 [2024-06-11 13:12:56.357509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.930  Copying: 512/512 [B] (average 166 kBps) 00:27:38.930 00:27:38.931 13:12:57 -- dd/posix.sh@93 -- # [[ xe2mz7t1667p63jl8y84686f3sjsad601hhaflpq87yjbkcfngow8oji59h4hvu1pzkdlvgrt7wv7vv04g17s9nhiltbq4f5vexemjnhjzuv49a38wnnifksfvixbjo7q5gwubj085y2yp0ompm9duc9a9201vycd0ckbw5eu8vp4abvu5yghz0tpr663zvgiy4aa195cikvw6lomd1nz77xmkrom3fnp20tnjjlqznvjcgxsrt8sgwwaiamvuu5lgh0wv4ib0q9k8jf58wc0ny53gpdoo9xi5pq4z2ouhvojxitvm8t8y7lpuke5pnoesxsoskh4di0p0wie0mjj6bz9v13u79zft6fphxm3h12txx2yp9jrom5rfv150lwb0vu37vzk94iq90o8nrleogjshl0eroj9nyzdkq20slc4ml00hz1mf12ex9012uc7xcm48widhjutpiun2ssvggnaxbqs3a49acegvrhuzkiawc8puwp5y8a0u6am8yq == \x\e\2\m\z\7\t\1\6\6\7\p\6\3\j\l\8\y\8\4\6\8\6\f\3\s\j\s\a\d\6\0\1\h\h\a\f\l\p\q\8\7\y\j\b\k\c\f\n\g\o\w\8\o\j\i\5\9\h\4\h\v\u\1\p\z\k\d\l\v\g\r\t\7\w\v\7\v\v\0\4\g\1\7\s\9\n\h\i\l\t\b\q\4\f\5\v\e\x\e\m\j\n\h\j\z\u\v\4\9\a\3\8\w\n\n\i\f\k\s\f\v\i\x\b\j\o\7\q\5\g\w\u\b\j\0\8\5\y\2\y\p\0\o\m\p\m\9\d\u\c\9\a\9\2\0\1\v\y\c\d\0\c\k\b\w\5\e\u\8\v\p\4\a\b\v\u\5\y\g\h\z\0\t\p\r\6\6\3\z\v\g\i\y\4\a\a\1\9\5\c\i\k\v\w\6\l\o\m\d\1\n\z\7\7\x\m\k\r\o\m\3\f\n\p\2\0\t\n\j\j\l\q\z\n\v\j\c\g\x\s\r\t\8\s\g\w\w\a\i\a\m\v\u\u\5\l\g\h\0\w\v\4\i\b\0\q\9\k\8\j\f\5\8\w\c\0\n\y\5\3\g\p\d\o\o\9\x\i\5\p\q\4\z\2\o\u\h\v\o\j\x\i\t\v\m\8\t\8\y\7\l\p\u\k\e\5\p\n\o\e\s\x\s\o\s\k\h\4\d\i\0\p\0\w\i\e\0\m\j\j\6\b\z\9\v\1\3\u\7\9\z\f\t\6\f\p\h\x\m\3\h\1\2\t\x\x\2\y\p\9\j\r\o\m\5\r\f\v\1\5\0\l\w\b\0\v\u\3\7\v\z\k\9\4\i\q\9\0\o\8\n\r\l\e\o\g\j\s\h\l\0\e\r\o\j\9\n\y\z\d\k\q\2\0\s\l\c\4\m\l\0\0\h\z\1\m\f\1\2\e\x\9\0\1\2\u\c\7\x\c\m\4\8\w\i\d\h\j\u\t\p\i\u\n\2\s\s\v\g\g\n\a\x\b\q\s\3\a\4\9\a\c\e\g\v\r\h\u\z\k\i\a\w\c\8\p\u\w\p\5\y\8\a\0\u\6\a\m\8\y\q ]] 00:27:38.931 13:12:57 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:38.931 13:12:57 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:38.931 13:12:57 -- dd/common.sh@98 -- # xtrace_disable 00:27:38.931 13:12:57 -- common/autotest_common.sh@10 -- # set +x 00:27:38.931 13:12:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:38.931 13:12:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:38.931 [2024-06-11 13:12:57.646124] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:38.931 [2024-06-11 13:12:57.646459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138624 ] 00:27:39.189 [2024-06-11 13:12:57.798430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.189 [2024-06-11 13:12:57.978234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.383  Copying: 512/512 [B] (average 500 kBps) 00:27:40.383 00:27:40.384 13:12:59 -- dd/posix.sh@93 -- # [[ 8lwuocovhpep9oka4mnrp5n0n9crge92evudkmzxa4vnmfpqa67xxlu5b65b3oezvqyy89qeungom2eygxq1bddsg4scm2dqzzxbat0pi16i49cvslswlde1vixx0dv10wrvlit7ul2cujwd1rdepdo6r5alyuva9srmukx7uw5xbmayyk65v7d4cz3a3mccungrexne20saqpnvw40ffnfsrdy5vk7vxypclts3lq8p3d08zi0k7jq4lk35qmhlyntc2qga2mmsw7rnt67kobihhnb8qb5uh0dh7yev4buagqrgqm13bcil8i5ayfe5k9pixypmdeevwy9nwb1w88kipothjhwf1897rkimh3w4lcxsi27ljyulteif33t8npxe6zjqq7rtut014ngucx6c2tmvzeyrhgx96bvo0g4ansib3042hsl2niwlgr25xroxk9r9wnhka06kbswtm62i3zlyncd46gknohntor4ty9c0ybbo0p9d2siuhlt1 == \8\l\w\u\o\c\o\v\h\p\e\p\9\o\k\a\4\m\n\r\p\5\n\0\n\9\c\r\g\e\9\2\e\v\u\d\k\m\z\x\a\4\v\n\m\f\p\q\a\6\7\x\x\l\u\5\b\6\5\b\3\o\e\z\v\q\y\y\8\9\q\e\u\n\g\o\m\2\e\y\g\x\q\1\b\d\d\s\g\4\s\c\m\2\d\q\z\z\x\b\a\t\0\p\i\1\6\i\4\9\c\v\s\l\s\w\l\d\e\1\v\i\x\x\0\d\v\1\0\w\r\v\l\i\t\7\u\l\2\c\u\j\w\d\1\r\d\e\p\d\o\6\r\5\a\l\y\u\v\a\9\s\r\m\u\k\x\7\u\w\5\x\b\m\a\y\y\k\6\5\v\7\d\4\c\z\3\a\3\m\c\c\u\n\g\r\e\x\n\e\2\0\s\a\q\p\n\v\w\4\0\f\f\n\f\s\r\d\y\5\v\k\7\v\x\y\p\c\l\t\s\3\l\q\8\p\3\d\0\8\z\i\0\k\7\j\q\4\l\k\3\5\q\m\h\l\y\n\t\c\2\q\g\a\2\m\m\s\w\7\r\n\t\6\7\k\o\b\i\h\h\n\b\8\q\b\5\u\h\0\d\h\7\y\e\v\4\b\u\a\g\q\r\g\q\m\1\3\b\c\i\l\8\i\5\a\y\f\e\5\k\9\p\i\x\y\p\m\d\e\e\v\w\y\9\n\w\b\1\w\8\8\k\i\p\o\t\h\j\h\w\f\1\8\9\7\r\k\i\m\h\3\w\4\l\c\x\s\i\2\7\l\j\y\u\l\t\e\i\f\3\3\t\8\n\p\x\e\6\z\j\q\q\7\r\t\u\t\0\1\4\n\g\u\c\x\6\c\2\t\m\v\z\e\y\r\h\g\x\9\6\b\v\o\0\g\4\a\n\s\i\b\3\0\4\2\h\s\l\2\n\i\w\l\g\r\2\5\x\r\o\x\k\9\r\9\w\n\h\k\a\0\6\k\b\s\w\t\m\6\2\i\3\z\l\y\n\c\d\4\6\g\k\n\o\h\n\t\o\r\4\t\y\9\c\0\y\b\b\o\0\p\9\d\2\s\i\u\h\l\t\1 ]] 00:27:40.384 13:12:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:40.384 13:12:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:40.642 [2024-06-11 13:12:59.268271] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:40.642 [2024-06-11 13:12:59.268666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138652 ] 00:27:40.642 [2024-06-11 13:12:59.434819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.900 [2024-06-11 13:12:59.600123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.092  Copying: 512/512 [B] (average 500 kBps) 00:27:42.093 00:27:42.093 13:13:00 -- dd/posix.sh@93 -- # [[ 8lwuocovhpep9oka4mnrp5n0n9crge92evudkmzxa4vnmfpqa67xxlu5b65b3oezvqyy89qeungom2eygxq1bddsg4scm2dqzzxbat0pi16i49cvslswlde1vixx0dv10wrvlit7ul2cujwd1rdepdo6r5alyuva9srmukx7uw5xbmayyk65v7d4cz3a3mccungrexne20saqpnvw40ffnfsrdy5vk7vxypclts3lq8p3d08zi0k7jq4lk35qmhlyntc2qga2mmsw7rnt67kobihhnb8qb5uh0dh7yev4buagqrgqm13bcil8i5ayfe5k9pixypmdeevwy9nwb1w88kipothjhwf1897rkimh3w4lcxsi27ljyulteif33t8npxe6zjqq7rtut014ngucx6c2tmvzeyrhgx96bvo0g4ansib3042hsl2niwlgr25xroxk9r9wnhka06kbswtm62i3zlyncd46gknohntor4ty9c0ybbo0p9d2siuhlt1 == \8\l\w\u\o\c\o\v\h\p\e\p\9\o\k\a\4\m\n\r\p\5\n\0\n\9\c\r\g\e\9\2\e\v\u\d\k\m\z\x\a\4\v\n\m\f\p\q\a\6\7\x\x\l\u\5\b\6\5\b\3\o\e\z\v\q\y\y\8\9\q\e\u\n\g\o\m\2\e\y\g\x\q\1\b\d\d\s\g\4\s\c\m\2\d\q\z\z\x\b\a\t\0\p\i\1\6\i\4\9\c\v\s\l\s\w\l\d\e\1\v\i\x\x\0\d\v\1\0\w\r\v\l\i\t\7\u\l\2\c\u\j\w\d\1\r\d\e\p\d\o\6\r\5\a\l\y\u\v\a\9\s\r\m\u\k\x\7\u\w\5\x\b\m\a\y\y\k\6\5\v\7\d\4\c\z\3\a\3\m\c\c\u\n\g\r\e\x\n\e\2\0\s\a\q\p\n\v\w\4\0\f\f\n\f\s\r\d\y\5\v\k\7\v\x\y\p\c\l\t\s\3\l\q\8\p\3\d\0\8\z\i\0\k\7\j\q\4\l\k\3\5\q\m\h\l\y\n\t\c\2\q\g\a\2\m\m\s\w\7\r\n\t\6\7\k\o\b\i\h\h\n\b\8\q\b\5\u\h\0\d\h\7\y\e\v\4\b\u\a\g\q\r\g\q\m\1\3\b\c\i\l\8\i\5\a\y\f\e\5\k\9\p\i\x\y\p\m\d\e\e\v\w\y\9\n\w\b\1\w\8\8\k\i\p\o\t\h\j\h\w\f\1\8\9\7\r\k\i\m\h\3\w\4\l\c\x\s\i\2\7\l\j\y\u\l\t\e\i\f\3\3\t\8\n\p\x\e\6\z\j\q\q\7\r\t\u\t\0\1\4\n\g\u\c\x\6\c\2\t\m\v\z\e\y\r\h\g\x\9\6\b\v\o\0\g\4\a\n\s\i\b\3\0\4\2\h\s\l\2\n\i\w\l\g\r\2\5\x\r\o\x\k\9\r\9\w\n\h\k\a\0\6\k\b\s\w\t\m\6\2\i\3\z\l\y\n\c\d\4\6\g\k\n\o\h\n\t\o\r\4\t\y\9\c\0\y\b\b\o\0\p\9\d\2\s\i\u\h\l\t\1 ]] 00:27:42.093 13:13:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:42.093 13:13:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:42.093 [2024-06-11 13:13:00.913160] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:42.093 [2024-06-11 13:13:00.913802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138670 ] 00:27:42.351 [2024-06-11 13:13:01.078995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.609 [2024-06-11 13:13:01.245253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.807  Copying: 512/512 [B] (average 125 kBps) 00:27:43.807 00:27:43.807 13:13:02 -- dd/posix.sh@93 -- # [[ 8lwuocovhpep9oka4mnrp5n0n9crge92evudkmzxa4vnmfpqa67xxlu5b65b3oezvqyy89qeungom2eygxq1bddsg4scm2dqzzxbat0pi16i49cvslswlde1vixx0dv10wrvlit7ul2cujwd1rdepdo6r5alyuva9srmukx7uw5xbmayyk65v7d4cz3a3mccungrexne20saqpnvw40ffnfsrdy5vk7vxypclts3lq8p3d08zi0k7jq4lk35qmhlyntc2qga2mmsw7rnt67kobihhnb8qb5uh0dh7yev4buagqrgqm13bcil8i5ayfe5k9pixypmdeevwy9nwb1w88kipothjhwf1897rkimh3w4lcxsi27ljyulteif33t8npxe6zjqq7rtut014ngucx6c2tmvzeyrhgx96bvo0g4ansib3042hsl2niwlgr25xroxk9r9wnhka06kbswtm62i3zlyncd46gknohntor4ty9c0ybbo0p9d2siuhlt1 == \8\l\w\u\o\c\o\v\h\p\e\p\9\o\k\a\4\m\n\r\p\5\n\0\n\9\c\r\g\e\9\2\e\v\u\d\k\m\z\x\a\4\v\n\m\f\p\q\a\6\7\x\x\l\u\5\b\6\5\b\3\o\e\z\v\q\y\y\8\9\q\e\u\n\g\o\m\2\e\y\g\x\q\1\b\d\d\s\g\4\s\c\m\2\d\q\z\z\x\b\a\t\0\p\i\1\6\i\4\9\c\v\s\l\s\w\l\d\e\1\v\i\x\x\0\d\v\1\0\w\r\v\l\i\t\7\u\l\2\c\u\j\w\d\1\r\d\e\p\d\o\6\r\5\a\l\y\u\v\a\9\s\r\m\u\k\x\7\u\w\5\x\b\m\a\y\y\k\6\5\v\7\d\4\c\z\3\a\3\m\c\c\u\n\g\r\e\x\n\e\2\0\s\a\q\p\n\v\w\4\0\f\f\n\f\s\r\d\y\5\v\k\7\v\x\y\p\c\l\t\s\3\l\q\8\p\3\d\0\8\z\i\0\k\7\j\q\4\l\k\3\5\q\m\h\l\y\n\t\c\2\q\g\a\2\m\m\s\w\7\r\n\t\6\7\k\o\b\i\h\h\n\b\8\q\b\5\u\h\0\d\h\7\y\e\v\4\b\u\a\g\q\r\g\q\m\1\3\b\c\i\l\8\i\5\a\y\f\e\5\k\9\p\i\x\y\p\m\d\e\e\v\w\y\9\n\w\b\1\w\8\8\k\i\p\o\t\h\j\h\w\f\1\8\9\7\r\k\i\m\h\3\w\4\l\c\x\s\i\2\7\l\j\y\u\l\t\e\i\f\3\3\t\8\n\p\x\e\6\z\j\q\q\7\r\t\u\t\0\1\4\n\g\u\c\x\6\c\2\t\m\v\z\e\y\r\h\g\x\9\6\b\v\o\0\g\4\a\n\s\i\b\3\0\4\2\h\s\l\2\n\i\w\l\g\r\2\5\x\r\o\x\k\9\r\9\w\n\h\k\a\0\6\k\b\s\w\t\m\6\2\i\3\z\l\y\n\c\d\4\6\g\k\n\o\h\n\t\o\r\4\t\y\9\c\0\y\b\b\o\0\p\9\d\2\s\i\u\h\l\t\1 ]] 00:27:43.807 13:13:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:43.807 13:13:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:43.807 [2024-06-11 13:13:02.537107] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:43.807 [2024-06-11 13:13:02.538068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138694 ] 00:27:44.065 [2024-06-11 13:13:02.703959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.066 [2024-06-11 13:13:02.895585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.570  Copying: 512/512 [B] (average 250 kBps) 00:27:45.570 00:27:45.570 ************************************ 00:27:45.570 END TEST dd_flags_misc 00:27:45.570 ************************************ 00:27:45.570 13:13:04 -- dd/posix.sh@93 -- # [[ 8lwuocovhpep9oka4mnrp5n0n9crge92evudkmzxa4vnmfpqa67xxlu5b65b3oezvqyy89qeungom2eygxq1bddsg4scm2dqzzxbat0pi16i49cvslswlde1vixx0dv10wrvlit7ul2cujwd1rdepdo6r5alyuva9srmukx7uw5xbmayyk65v7d4cz3a3mccungrexne20saqpnvw40ffnfsrdy5vk7vxypclts3lq8p3d08zi0k7jq4lk35qmhlyntc2qga2mmsw7rnt67kobihhnb8qb5uh0dh7yev4buagqrgqm13bcil8i5ayfe5k9pixypmdeevwy9nwb1w88kipothjhwf1897rkimh3w4lcxsi27ljyulteif33t8npxe6zjqq7rtut014ngucx6c2tmvzeyrhgx96bvo0g4ansib3042hsl2niwlgr25xroxk9r9wnhka06kbswtm62i3zlyncd46gknohntor4ty9c0ybbo0p9d2siuhlt1 == \8\l\w\u\o\c\o\v\h\p\e\p\9\o\k\a\4\m\n\r\p\5\n\0\n\9\c\r\g\e\9\2\e\v\u\d\k\m\z\x\a\4\v\n\m\f\p\q\a\6\7\x\x\l\u\5\b\6\5\b\3\o\e\z\v\q\y\y\8\9\q\e\u\n\g\o\m\2\e\y\g\x\q\1\b\d\d\s\g\4\s\c\m\2\d\q\z\z\x\b\a\t\0\p\i\1\6\i\4\9\c\v\s\l\s\w\l\d\e\1\v\i\x\x\0\d\v\1\0\w\r\v\l\i\t\7\u\l\2\c\u\j\w\d\1\r\d\e\p\d\o\6\r\5\a\l\y\u\v\a\9\s\r\m\u\k\x\7\u\w\5\x\b\m\a\y\y\k\6\5\v\7\d\4\c\z\3\a\3\m\c\c\u\n\g\r\e\x\n\e\2\0\s\a\q\p\n\v\w\4\0\f\f\n\f\s\r\d\y\5\v\k\7\v\x\y\p\c\l\t\s\3\l\q\8\p\3\d\0\8\z\i\0\k\7\j\q\4\l\k\3\5\q\m\h\l\y\n\t\c\2\q\g\a\2\m\m\s\w\7\r\n\t\6\7\k\o\b\i\h\h\n\b\8\q\b\5\u\h\0\d\h\7\y\e\v\4\b\u\a\g\q\r\g\q\m\1\3\b\c\i\l\8\i\5\a\y\f\e\5\k\9\p\i\x\y\p\m\d\e\e\v\w\y\9\n\w\b\1\w\8\8\k\i\p\o\t\h\j\h\w\f\1\8\9\7\r\k\i\m\h\3\w\4\l\c\x\s\i\2\7\l\j\y\u\l\t\e\i\f\3\3\t\8\n\p\x\e\6\z\j\q\q\7\r\t\u\t\0\1\4\n\g\u\c\x\6\c\2\t\m\v\z\e\y\r\h\g\x\9\6\b\v\o\0\g\4\a\n\s\i\b\3\0\4\2\h\s\l\2\n\i\w\l\g\r\2\5\x\r\o\x\k\9\r\9\w\n\h\k\a\0\6\k\b\s\w\t\m\6\2\i\3\z\l\y\n\c\d\4\6\g\k\n\o\h\n\t\o\r\4\t\y\9\c\0\y\b\b\o\0\p\9\d\2\s\i\u\h\l\t\1 ]] 00:27:45.570 00:27:45.570 real 0m13.122s 00:27:45.570 user 0m10.371s 00:27:45.570 sys 0m1.656s 00:27:45.570 13:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.570 13:13:04 -- common/autotest_common.sh@10 -- # set +x 00:27:45.570 13:13:04 -- dd/posix.sh@131 -- # tests_forced_aio 00:27:45.570 13:13:04 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:27:45.570 * Second test run, using AIO 00:27:45.570 13:13:04 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:27:45.570 13:13:04 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:27:45.570 13:13:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:45.570 13:13:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:45.570 13:13:04 -- common/autotest_common.sh@10 -- # set +x 00:27:45.570 ************************************ 00:27:45.570 START TEST dd_flag_append_forced_aio 00:27:45.571 ************************************ 00:27:45.571 13:13:04 -- common/autotest_common.sh@1104 -- # append 00:27:45.571 13:13:04 -- dd/posix.sh@16 -- # local dump0 00:27:45.571 13:13:04 -- dd/posix.sh@17 -- # local dump1 00:27:45.571 13:13:04 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:45.571 13:13:04 -- dd/common.sh@98 -- # xtrace_disable 
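A minimal sketch of the append check being set up here, assuming plain GNU dd rather than spdk_dd --aio; the variable names follow the test, the paths are illustrative:
dump0=$(tr -dc a-z0-9 < /dev/urandom | head -c 32)   # 32 random characters, like gen_bytes 32
dump1=$(tr -dc a-z0-9 < /dev/urandom | head -c 32)
printf %s "$dump0" > /tmp/dd.dump0
printf %s "$dump1" > /tmp/dd.dump1
dd if=/tmp/dd.dump0 of=/tmp/dd.dump1 oflag=append conv=notrunc status=none
[[ $(< /tmp/dd.dump1) == "${dump1}${dump0}" ]]       # destination keeps its bytes; the source lands after them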
00:27:45.571 13:13:04 -- common/autotest_common.sh@10 -- # set +x 00:27:45.571 13:13:04 -- dd/posix.sh@19 -- # dump0=ur57hgftq9erxfri7vm9lxq9mw6g9d51 00:27:45.571 13:13:04 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:45.571 13:13:04 -- dd/common.sh@98 -- # xtrace_disable 00:27:45.571 13:13:04 -- common/autotest_common.sh@10 -- # set +x 00:27:45.571 13:13:04 -- dd/posix.sh@20 -- # dump1=tz9fp7700ff5iamhmlbbgx8rsntlv0g8 00:27:45.571 13:13:04 -- dd/posix.sh@22 -- # printf %s ur57hgftq9erxfri7vm9lxq9mw6g9d51 00:27:45.571 13:13:04 -- dd/posix.sh@23 -- # printf %s tz9fp7700ff5iamhmlbbgx8rsntlv0g8 00:27:45.571 13:13:04 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:45.571 [2024-06-11 13:13:04.243054] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:45.571 [2024-06-11 13:13:04.243410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138738 ] 00:27:45.571 [2024-06-11 13:13:04.400686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.830 [2024-06-11 13:13:04.571560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.059  Copying: 32/32 [B] (average 31 kBps) 00:27:47.059 00:27:47.059 ************************************ 00:27:47.059 END TEST dd_flag_append_forced_aio 00:27:47.059 ************************************ 00:27:47.059 13:13:05 -- dd/posix.sh@27 -- # [[ tz9fp7700ff5iamhmlbbgx8rsntlv0g8ur57hgftq9erxfri7vm9lxq9mw6g9d51 == \t\z\9\f\p\7\7\0\0\f\f\5\i\a\m\h\m\l\b\b\g\x\8\r\s\n\t\l\v\0\g\8\u\r\5\7\h\g\f\t\q\9\e\r\x\f\r\i\7\v\m\9\l\x\q\9\m\w\6\g\9\d\5\1 ]] 00:27:47.059 00:27:47.059 real 0m1.608s 00:27:47.059 user 0m1.273s 00:27:47.059 sys 0m0.201s 00:27:47.059 13:13:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.059 13:13:05 -- common/autotest_common.sh@10 -- # set +x 00:27:47.059 13:13:05 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:27:47.059 13:13:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:47.059 13:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:47.059 13:13:05 -- common/autotest_common.sh@10 -- # set +x 00:27:47.059 ************************************ 00:27:47.059 START TEST dd_flag_directory_forced_aio 00:27:47.059 ************************************ 00:27:47.059 13:13:05 -- common/autotest_common.sh@1104 -- # directory 00:27:47.059 13:13:05 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:47.059 13:13:05 -- common/autotest_common.sh@640 -- # local es=0 00:27:47.059 13:13:05 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:47.059 13:13:05 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:47.059 13:13:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:47.059 13:13:05 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:47.059 13:13:05 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:47.059 13:13:05 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:47.059 13:13:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:47.059 13:13:05 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:47.059 13:13:05 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:47.059 13:13:05 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:47.318 [2024-06-11 13:13:05.915750] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:47.318 [2024-06-11 13:13:05.916150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138785 ] 00:27:47.318 [2024-06-11 13:13:06.081276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.577 [2024-06-11 13:13:06.256780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.835 [2024-06-11 13:13:06.513325] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:47.835 [2024-06-11 13:13:06.513703] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:47.835 [2024-06-11 13:13:06.513762] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:48.403 [2024-06-11 13:13:07.111939] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:48.662 13:13:07 -- common/autotest_common.sh@643 -- # es=236 00:27:48.662 13:13:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:48.662 13:13:07 -- common/autotest_common.sh@652 -- # es=108 00:27:48.662 13:13:07 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:48.662 13:13:07 -- common/autotest_common.sh@660 -- # es=1 00:27:48.662 13:13:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:48.662 13:13:07 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:48.662 13:13:07 -- common/autotest_common.sh@640 -- # local es=0 00:27:48.662 13:13:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:48.662 13:13:07 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:48.662 13:13:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:48.662 13:13:07 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:48.662 13:13:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:48.662 13:13:07 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:48.662 13:13:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:48.662 13:13:07 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:27:48.662 13:13:07 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:48.662 13:13:07 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:48.662 [2024-06-11 13:13:07.501874] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:48.662 [2024-06-11 13:13:07.502302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138831 ] 00:27:48.921 [2024-06-11 13:13:07.660255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.180 [2024-06-11 13:13:07.840355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.438 [2024-06-11 13:13:08.109259] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:49.438 [2024-06-11 13:13:08.109517] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:49.438 [2024-06-11 13:13:08.109586] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:50.006 [2024-06-11 13:13:08.692774] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:50.264 ************************************ 00:27:50.264 END TEST dd_flag_directory_forced_aio 00:27:50.264 ************************************ 00:27:50.264 13:13:09 -- common/autotest_common.sh@643 -- # es=236 00:27:50.264 13:13:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:50.264 13:13:09 -- common/autotest_common.sh@652 -- # es=108 00:27:50.264 13:13:09 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:50.264 13:13:09 -- common/autotest_common.sh@660 -- # es=1 00:27:50.264 13:13:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:50.264 00:27:50.264 real 0m3.182s 00:27:50.264 user 0m2.505s 00:27:50.264 sys 0m0.468s 00:27:50.264 13:13:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.264 13:13:09 -- common/autotest_common.sh@10 -- # set +x 00:27:50.264 13:13:09 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:27:50.264 13:13:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:50.264 13:13:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:50.264 13:13:09 -- common/autotest_common.sh@10 -- # set +x 00:27:50.264 ************************************ 00:27:50.264 START TEST dd_flag_nofollow_forced_aio 00:27:50.264 ************************************ 00:27:50.264 13:13:09 -- common/autotest_common.sh@1104 -- # nofollow 00:27:50.264 13:13:09 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:50.264 13:13:09 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:50.264 13:13:09 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:50.264 13:13:09 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:50.264 13:13:09 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:50.264 13:13:09 -- common/autotest_common.sh@640 -- # local es=0 00:27:50.264 13:13:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:50.264 13:13:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:50.264 13:13:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:50.264 13:13:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:50.264 13:13:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:50.264 13:13:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:50.264 13:13:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:50.264 13:13:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:50.264 13:13:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:50.264 13:13:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:50.523 [2024-06-11 13:13:09.153636] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:50.523 [2024-06-11 13:13:09.154037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138869 ] 00:27:50.523 [2024-06-11 13:13:09.343794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.781 [2024-06-11 13:13:09.548854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.040 [2024-06-11 13:13:09.801330] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:51.040 [2024-06-11 13:13:09.801666] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:51.040 [2024-06-11 13:13:09.801728] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:51.606 [2024-06-11 13:13:10.384916] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:52.173 13:13:10 -- common/autotest_common.sh@643 -- # es=216 00:27:52.173 13:13:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:52.173 13:13:10 -- common/autotest_common.sh@652 -- # es=88 00:27:52.173 13:13:10 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:52.174 13:13:10 -- common/autotest_common.sh@660 -- # es=1 00:27:52.174 13:13:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:52.174 13:13:10 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:52.174 13:13:10 -- common/autotest_common.sh@640 -- # local es=0 00:27:52.174 13:13:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:52.174 13:13:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.174 13:13:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.174 13:13:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.174 13:13:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.174 13:13:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.174 13:13:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:52.174 13:13:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:52.174 13:13:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:52.174 13:13:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:52.174 [2024-06-11 13:13:10.807772] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:52.174 [2024-06-11 13:13:10.808206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138902 ] 00:27:52.174 [2024-06-11 13:13:10.975979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.432 [2024-06-11 13:13:11.148810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.691 [2024-06-11 13:13:11.398731] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:52.691 [2024-06-11 13:13:11.399094] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:52.691 [2024-06-11 13:13:11.399154] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:53.258 [2024-06-11 13:13:11.989219] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:53.515 13:13:12 -- common/autotest_common.sh@643 -- # es=216 00:27:53.515 13:13:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:53.515 13:13:12 -- common/autotest_common.sh@652 -- # es=88 00:27:53.515 13:13:12 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:53.515 13:13:12 -- common/autotest_common.sh@660 -- # es=1 00:27:53.515 13:13:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:53.515 13:13:12 -- dd/posix.sh@46 -- # gen_bytes 512 00:27:53.515 13:13:12 -- dd/common.sh@98 -- # xtrace_disable 00:27:53.515 13:13:12 -- common/autotest_common.sh@10 -- # set +x 00:27:53.515 13:13:12 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:53.773 [2024-06-11 13:13:12.407614] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
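A minimal sketch of the nofollow behaviour this test exercises, assuming plain GNU dd in place of spdk_dd --aio; paths are illustrative:
ln -fs /tmp/dd.dump0 /tmp/dd.dump0.link
if dd if=/tmp/dd.dump0.link iflag=nofollow of=/dev/null status=none 2>/dev/null; then
  echo 'copy through the symlink unexpectedly succeeded' >&2 && exit 1   # must fail with ELOOP, as the "Too many levels of symbolic links" errors above show
fi
dd if=/tmp/dd.dump0 of=/dev/null status=none                             # the target itself still reads fine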
00:27:53.773 [2024-06-11 13:13:12.408044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138917 ] 00:27:53.773 [2024-06-11 13:13:12.576204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.031 [2024-06-11 13:13:12.767909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.223  Copying: 512/512 [B] (average 500 kBps) 00:27:55.223 00:27:55.223 ************************************ 00:27:55.223 END TEST dd_flag_nofollow_forced_aio 00:27:55.223 ************************************ 00:27:55.223 13:13:14 -- dd/posix.sh@49 -- # [[ 9k9p01rc8fpjryuyjfvc7n3tns5b1b8s6plsxqyjeub2o8m74hr2s3omftjk7rerjgn2jfx5bz2ycvpnegsyr9yxfnfeqzzph8vxi9e525zmgz31i4nrjghqxtyzgti4akbvxicaef55cbxb2ccb3dvds3512wqim75mq2sgy2gzscs4twlyw66etaq5bp4dkp5y79isq2avlx0qs0cpj8kzmsvwec9kqwp0e72yd8dyesgcx14jwxi7hlg767hmh8qx0e734kasnx4m42i43kdqvlyiaff2crl9qvxgecu3kgoz2qe6k4ay9pb9we18jidso8wte9il10xanj05iej7ron9g4u0bt5qhedbkycq0zsvuasx7r3xqdxufvwa3i7aufsklt4ksh429bj4guvqk3g2wk104vb7vlwryoz2e66qc1ctovalocrmjyuf9rmn1heqtn1si1eaoc1m17qq7lqtvtirrbqm3cr5rb6gj4kfsirke8yq6z5l94v3 == \9\k\9\p\0\1\r\c\8\f\p\j\r\y\u\y\j\f\v\c\7\n\3\t\n\s\5\b\1\b\8\s\6\p\l\s\x\q\y\j\e\u\b\2\o\8\m\7\4\h\r\2\s\3\o\m\f\t\j\k\7\r\e\r\j\g\n\2\j\f\x\5\b\z\2\y\c\v\p\n\e\g\s\y\r\9\y\x\f\n\f\e\q\z\z\p\h\8\v\x\i\9\e\5\2\5\z\m\g\z\3\1\i\4\n\r\j\g\h\q\x\t\y\z\g\t\i\4\a\k\b\v\x\i\c\a\e\f\5\5\c\b\x\b\2\c\c\b\3\d\v\d\s\3\5\1\2\w\q\i\m\7\5\m\q\2\s\g\y\2\g\z\s\c\s\4\t\w\l\y\w\6\6\e\t\a\q\5\b\p\4\d\k\p\5\y\7\9\i\s\q\2\a\v\l\x\0\q\s\0\c\p\j\8\k\z\m\s\v\w\e\c\9\k\q\w\p\0\e\7\2\y\d\8\d\y\e\s\g\c\x\1\4\j\w\x\i\7\h\l\g\7\6\7\h\m\h\8\q\x\0\e\7\3\4\k\a\s\n\x\4\m\4\2\i\4\3\k\d\q\v\l\y\i\a\f\f\2\c\r\l\9\q\v\x\g\e\c\u\3\k\g\o\z\2\q\e\6\k\4\a\y\9\p\b\9\w\e\1\8\j\i\d\s\o\8\w\t\e\9\i\l\1\0\x\a\n\j\0\5\i\e\j\7\r\o\n\9\g\4\u\0\b\t\5\q\h\e\d\b\k\y\c\q\0\z\s\v\u\a\s\x\7\r\3\x\q\d\x\u\f\v\w\a\3\i\7\a\u\f\s\k\l\t\4\k\s\h\4\2\9\b\j\4\g\u\v\q\k\3\g\2\w\k\1\0\4\v\b\7\v\l\w\r\y\o\z\2\e\6\6\q\c\1\c\t\o\v\a\l\o\c\r\m\j\y\u\f\9\r\m\n\1\h\e\q\t\n\1\s\i\1\e\a\o\c\1\m\1\7\q\q\7\l\q\t\v\t\i\r\r\b\q\m\3\c\r\5\r\b\6\g\j\4\k\f\s\i\r\k\e\8\y\q\6\z\5\l\9\4\v\3 ]] 00:27:55.223 00:27:55.223 real 0m4.935s 00:27:55.223 user 0m3.901s 00:27:55.223 sys 0m0.689s 00:27:55.223 13:13:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.223 13:13:14 -- common/autotest_common.sh@10 -- # set +x 00:27:55.223 13:13:14 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:27:55.223 13:13:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:55.223 13:13:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:55.223 13:13:14 -- common/autotest_common.sh@10 -- # set +x 00:27:55.481 ************************************ 00:27:55.482 START TEST dd_flag_noatime_forced_aio 00:27:55.482 ************************************ 00:27:55.482 13:13:14 -- common/autotest_common.sh@1104 -- # noatime 00:27:55.482 13:13:14 -- dd/posix.sh@53 -- # local atime_if 00:27:55.482 13:13:14 -- dd/posix.sh@54 -- # local atime_of 00:27:55.482 13:13:14 -- dd/posix.sh@58 -- # gen_bytes 512 00:27:55.482 13:13:14 -- dd/common.sh@98 -- # xtrace_disable 00:27:55.482 13:13:14 -- common/autotest_common.sh@10 -- # set +x 00:27:55.482 13:13:14 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:55.482 13:13:14 -- dd/posix.sh@60 -- # atime_if=1718111593 
00:27:55.482 13:13:14 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:55.482 13:13:14 -- dd/posix.sh@61 -- # atime_of=1718111594 00:27:55.482 13:13:14 -- dd/posix.sh@66 -- # sleep 1 00:27:56.415 13:13:15 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:56.415 [2024-06-11 13:13:15.158602] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:56.415 [2024-06-11 13:13:15.159007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138981 ] 00:27:56.674 [2024-06-11 13:13:15.324208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.674 [2024-06-11 13:13:15.495832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.314  Copying: 512/512 [B] (average 500 kBps) 00:27:58.314 00:27:58.314 13:13:16 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:58.314 13:13:16 -- dd/posix.sh@69 -- # (( atime_if == 1718111593 )) 00:27:58.314 13:13:16 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:58.314 13:13:16 -- dd/posix.sh@70 -- # (( atime_of == 1718111594 )) 00:27:58.314 13:13:16 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:58.314 [2024-06-11 13:13:16.817002] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:58.314 [2024-06-11 13:13:16.817488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139027 ] 00:27:58.314 [2024-06-11 13:13:16.985553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.584 [2024-06-11 13:13:17.172334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.780  Copying: 512/512 [B] (average 500 kBps) 00:27:59.780 00:27:59.780 13:13:18 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:59.780 ************************************ 00:27:59.780 END TEST dd_flag_noatime_forced_aio 00:27:59.780 ************************************ 00:27:59.780 13:13:18 -- dd/posix.sh@73 -- # (( atime_if < 1718111597 )) 00:27:59.780 00:27:59.780 real 0m4.344s 00:27:59.780 user 0m2.622s 00:27:59.780 sys 0m0.447s 00:27:59.780 13:13:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.780 13:13:18 -- common/autotest_common.sh@10 -- # set +x 00:27:59.780 13:13:18 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:27:59.780 13:13:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:59.780 13:13:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:59.780 13:13:18 -- common/autotest_common.sh@10 -- # set +x 00:27:59.780 ************************************ 00:27:59.780 START TEST dd_flags_misc_forced_aio 00:27:59.780 ************************************ 00:27:59.780 13:13:18 -- common/autotest_common.sh@1104 -- # io 00:27:59.780 13:13:18 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:27:59.780 13:13:18 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:27:59.780 13:13:18 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:27:59.780 13:13:18 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:59.780 13:13:18 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:59.780 13:13:18 -- dd/common.sh@98 -- # xtrace_disable 00:27:59.780 13:13:18 -- common/autotest_common.sh@10 -- # set +x 00:27:59.780 13:13:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:59.780 13:13:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:59.780 [2024-06-11 13:13:18.535074] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:59.780 [2024-06-11 13:13:18.535406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139068 ] 00:28:00.039 [2024-06-11 13:13:18.685105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.039 [2024-06-11 13:13:18.849558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.235  Copying: 512/512 [B] (average 500 kBps) 00:28:01.235 00:28:01.495 13:13:20 -- dd/posix.sh@93 -- # [[ iv8zou4k47lpdao44lf8qw60l2rnfoi8vb08a5s0438m561lccgei9yzml1fzsm7itm4zr9k3q4xekmtmvosd4u92ag15bhcpprpqvec9yuac8r1okzttjgu4ohwufbvjcmqkcuqs69ovtzjf5eyhgx5ppp7qnwj4nrlv3b6vfmh3hrwnupnlbtac9ibnenc37g06orq336r75dnrao3jc4oe5jqi8kk8ac91rlgw8hv7mbx41q6mm2cldpe2dkpjcc5ue4vgmfb06ipmnupyfj8y1bokm4wxmn65pdl5kbw79fn5t0vovf3xy074tovmb7j1hzu19raxta9cyfaq45e62132y2efwij30ga016iyo2fqzxbui2q77b5dn8oe12pek2w0doq4a2odbqlj54vzwesdvrhegupcwlsjofcjzl7x0aitup5dyhyq3nk9bfebreeubi5lzbnsqlj7ude39z6bzr9jlv5kghw30ry830ou4jr8a8jtzfn8jwh == \i\v\8\z\o\u\4\k\4\7\l\p\d\a\o\4\4\l\f\8\q\w\6\0\l\2\r\n\f\o\i\8\v\b\0\8\a\5\s\0\4\3\8\m\5\6\1\l\c\c\g\e\i\9\y\z\m\l\1\f\z\s\m\7\i\t\m\4\z\r\9\k\3\q\4\x\e\k\m\t\m\v\o\s\d\4\u\9\2\a\g\1\5\b\h\c\p\p\r\p\q\v\e\c\9\y\u\a\c\8\r\1\o\k\z\t\t\j\g\u\4\o\h\w\u\f\b\v\j\c\m\q\k\c\u\q\s\6\9\o\v\t\z\j\f\5\e\y\h\g\x\5\p\p\p\7\q\n\w\j\4\n\r\l\v\3\b\6\v\f\m\h\3\h\r\w\n\u\p\n\l\b\t\a\c\9\i\b\n\e\n\c\3\7\g\0\6\o\r\q\3\3\6\r\7\5\d\n\r\a\o\3\j\c\4\o\e\5\j\q\i\8\k\k\8\a\c\9\1\r\l\g\w\8\h\v\7\m\b\x\4\1\q\6\m\m\2\c\l\d\p\e\2\d\k\p\j\c\c\5\u\e\4\v\g\m\f\b\0\6\i\p\m\n\u\p\y\f\j\8\y\1\b\o\k\m\4\w\x\m\n\6\5\p\d\l\5\k\b\w\7\9\f\n\5\t\0\v\o\v\f\3\x\y\0\7\4\t\o\v\m\b\7\j\1\h\z\u\1\9\r\a\x\t\a\9\c\y\f\a\q\4\5\e\6\2\1\3\2\y\2\e\f\w\i\j\3\0\g\a\0\1\6\i\y\o\2\f\q\z\x\b\u\i\2\q\7\7\b\5\d\n\8\o\e\1\2\p\e\k\2\w\0\d\o\q\4\a\2\o\d\b\q\l\j\5\4\v\z\w\e\s\d\v\r\h\e\g\u\p\c\w\l\s\j\o\f\c\j\z\l\7\x\0\a\i\t\u\p\5\d\y\h\y\q\3\n\k\9\b\f\e\b\r\e\e\u\b\i\5\l\z\b\n\s\q\l\j\7\u\d\e\3\9\z\6\b\z\r\9\j\l\v\5\k\g\h\w\3\0\r\y\8\3\0\o\u\4\j\r\8\a\8\j\t\z\f\n\8\j\w\h ]] 00:28:01.495 13:13:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:01.495 13:13:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:01.495 [2024-06-11 13:13:20.146113] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:01.495 [2024-06-11 13:13:20.146549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139096 ] 00:28:01.495 [2024-06-11 13:13:20.311452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.753 [2024-06-11 13:13:20.477798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.947  Copying: 512/512 [B] (average 500 kBps) 00:28:02.947 00:28:02.948 13:13:21 -- dd/posix.sh@93 -- # [[ iv8zou4k47lpdao44lf8qw60l2rnfoi8vb08a5s0438m561lccgei9yzml1fzsm7itm4zr9k3q4xekmtmvosd4u92ag15bhcpprpqvec9yuac8r1okzttjgu4ohwufbvjcmqkcuqs69ovtzjf5eyhgx5ppp7qnwj4nrlv3b6vfmh3hrwnupnlbtac9ibnenc37g06orq336r75dnrao3jc4oe5jqi8kk8ac91rlgw8hv7mbx41q6mm2cldpe2dkpjcc5ue4vgmfb06ipmnupyfj8y1bokm4wxmn65pdl5kbw79fn5t0vovf3xy074tovmb7j1hzu19raxta9cyfaq45e62132y2efwij30ga016iyo2fqzxbui2q77b5dn8oe12pek2w0doq4a2odbqlj54vzwesdvrhegupcwlsjofcjzl7x0aitup5dyhyq3nk9bfebreeubi5lzbnsqlj7ude39z6bzr9jlv5kghw30ry830ou4jr8a8jtzfn8jwh == \i\v\8\z\o\u\4\k\4\7\l\p\d\a\o\4\4\l\f\8\q\w\6\0\l\2\r\n\f\o\i\8\v\b\0\8\a\5\s\0\4\3\8\m\5\6\1\l\c\c\g\e\i\9\y\z\m\l\1\f\z\s\m\7\i\t\m\4\z\r\9\k\3\q\4\x\e\k\m\t\m\v\o\s\d\4\u\9\2\a\g\1\5\b\h\c\p\p\r\p\q\v\e\c\9\y\u\a\c\8\r\1\o\k\z\t\t\j\g\u\4\o\h\w\u\f\b\v\j\c\m\q\k\c\u\q\s\6\9\o\v\t\z\j\f\5\e\y\h\g\x\5\p\p\p\7\q\n\w\j\4\n\r\l\v\3\b\6\v\f\m\h\3\h\r\w\n\u\p\n\l\b\t\a\c\9\i\b\n\e\n\c\3\7\g\0\6\o\r\q\3\3\6\r\7\5\d\n\r\a\o\3\j\c\4\o\e\5\j\q\i\8\k\k\8\a\c\9\1\r\l\g\w\8\h\v\7\m\b\x\4\1\q\6\m\m\2\c\l\d\p\e\2\d\k\p\j\c\c\5\u\e\4\v\g\m\f\b\0\6\i\p\m\n\u\p\y\f\j\8\y\1\b\o\k\m\4\w\x\m\n\6\5\p\d\l\5\k\b\w\7\9\f\n\5\t\0\v\o\v\f\3\x\y\0\7\4\t\o\v\m\b\7\j\1\h\z\u\1\9\r\a\x\t\a\9\c\y\f\a\q\4\5\e\6\2\1\3\2\y\2\e\f\w\i\j\3\0\g\a\0\1\6\i\y\o\2\f\q\z\x\b\u\i\2\q\7\7\b\5\d\n\8\o\e\1\2\p\e\k\2\w\0\d\o\q\4\a\2\o\d\b\q\l\j\5\4\v\z\w\e\s\d\v\r\h\e\g\u\p\c\w\l\s\j\o\f\c\j\z\l\7\x\0\a\i\t\u\p\5\d\y\h\y\q\3\n\k\9\b\f\e\b\r\e\e\u\b\i\5\l\z\b\n\s\q\l\j\7\u\d\e\3\9\z\6\b\z\r\9\j\l\v\5\k\g\h\w\3\0\r\y\8\3\0\o\u\4\j\r\8\a\8\j\t\z\f\n\8\j\w\h ]] 00:28:02.948 13:13:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:02.948 13:13:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:02.948 [2024-06-11 13:13:21.778562] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:02.948 [2024-06-11 13:13:21.778950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139120 ] 00:28:03.206 [2024-06-11 13:13:21.946685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.463 [2024-06-11 13:13:22.125411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.651  Copying: 512/512 [B] (average 250 kBps) 00:28:04.651 00:28:04.651 13:13:23 -- dd/posix.sh@93 -- # [[ iv8zou4k47lpdao44lf8qw60l2rnfoi8vb08a5s0438m561lccgei9yzml1fzsm7itm4zr9k3q4xekmtmvosd4u92ag15bhcpprpqvec9yuac8r1okzttjgu4ohwufbvjcmqkcuqs69ovtzjf5eyhgx5ppp7qnwj4nrlv3b6vfmh3hrwnupnlbtac9ibnenc37g06orq336r75dnrao3jc4oe5jqi8kk8ac91rlgw8hv7mbx41q6mm2cldpe2dkpjcc5ue4vgmfb06ipmnupyfj8y1bokm4wxmn65pdl5kbw79fn5t0vovf3xy074tovmb7j1hzu19raxta9cyfaq45e62132y2efwij30ga016iyo2fqzxbui2q77b5dn8oe12pek2w0doq4a2odbqlj54vzwesdvrhegupcwlsjofcjzl7x0aitup5dyhyq3nk9bfebreeubi5lzbnsqlj7ude39z6bzr9jlv5kghw30ry830ou4jr8a8jtzfn8jwh == \i\v\8\z\o\u\4\k\4\7\l\p\d\a\o\4\4\l\f\8\q\w\6\0\l\2\r\n\f\o\i\8\v\b\0\8\a\5\s\0\4\3\8\m\5\6\1\l\c\c\g\e\i\9\y\z\m\l\1\f\z\s\m\7\i\t\m\4\z\r\9\k\3\q\4\x\e\k\m\t\m\v\o\s\d\4\u\9\2\a\g\1\5\b\h\c\p\p\r\p\q\v\e\c\9\y\u\a\c\8\r\1\o\k\z\t\t\j\g\u\4\o\h\w\u\f\b\v\j\c\m\q\k\c\u\q\s\6\9\o\v\t\z\j\f\5\e\y\h\g\x\5\p\p\p\7\q\n\w\j\4\n\r\l\v\3\b\6\v\f\m\h\3\h\r\w\n\u\p\n\l\b\t\a\c\9\i\b\n\e\n\c\3\7\g\0\6\o\r\q\3\3\6\r\7\5\d\n\r\a\o\3\j\c\4\o\e\5\j\q\i\8\k\k\8\a\c\9\1\r\l\g\w\8\h\v\7\m\b\x\4\1\q\6\m\m\2\c\l\d\p\e\2\d\k\p\j\c\c\5\u\e\4\v\g\m\f\b\0\6\i\p\m\n\u\p\y\f\j\8\y\1\b\o\k\m\4\w\x\m\n\6\5\p\d\l\5\k\b\w\7\9\f\n\5\t\0\v\o\v\f\3\x\y\0\7\4\t\o\v\m\b\7\j\1\h\z\u\1\9\r\a\x\t\a\9\c\y\f\a\q\4\5\e\6\2\1\3\2\y\2\e\f\w\i\j\3\0\g\a\0\1\6\i\y\o\2\f\q\z\x\b\u\i\2\q\7\7\b\5\d\n\8\o\e\1\2\p\e\k\2\w\0\d\o\q\4\a\2\o\d\b\q\l\j\5\4\v\z\w\e\s\d\v\r\h\e\g\u\p\c\w\l\s\j\o\f\c\j\z\l\7\x\0\a\i\t\u\p\5\d\y\h\y\q\3\n\k\9\b\f\e\b\r\e\e\u\b\i\5\l\z\b\n\s\q\l\j\7\u\d\e\3\9\z\6\b\z\r\9\j\l\v\5\k\g\h\w\3\0\r\y\8\3\0\o\u\4\j\r\8\a\8\j\t\z\f\n\8\j\w\h ]] 00:28:04.651 13:13:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:04.651 13:13:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:04.651 [2024-06-11 13:13:23.433624] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:04.651 [2024-06-11 13:13:23.434020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139141 ] 00:28:04.908 [2024-06-11 13:13:23.599284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.173 [2024-06-11 13:13:23.764688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.361  Copying: 512/512 [B] (average 166 kBps) 00:28:06.361 00:28:06.361 13:13:25 -- dd/posix.sh@93 -- # [[ iv8zou4k47lpdao44lf8qw60l2rnfoi8vb08a5s0438m561lccgei9yzml1fzsm7itm4zr9k3q4xekmtmvosd4u92ag15bhcpprpqvec9yuac8r1okzttjgu4ohwufbvjcmqkcuqs69ovtzjf5eyhgx5ppp7qnwj4nrlv3b6vfmh3hrwnupnlbtac9ibnenc37g06orq336r75dnrao3jc4oe5jqi8kk8ac91rlgw8hv7mbx41q6mm2cldpe2dkpjcc5ue4vgmfb06ipmnupyfj8y1bokm4wxmn65pdl5kbw79fn5t0vovf3xy074tovmb7j1hzu19raxta9cyfaq45e62132y2efwij30ga016iyo2fqzxbui2q77b5dn8oe12pek2w0doq4a2odbqlj54vzwesdvrhegupcwlsjofcjzl7x0aitup5dyhyq3nk9bfebreeubi5lzbnsqlj7ude39z6bzr9jlv5kghw30ry830ou4jr8a8jtzfn8jwh == \i\v\8\z\o\u\4\k\4\7\l\p\d\a\o\4\4\l\f\8\q\w\6\0\l\2\r\n\f\o\i\8\v\b\0\8\a\5\s\0\4\3\8\m\5\6\1\l\c\c\g\e\i\9\y\z\m\l\1\f\z\s\m\7\i\t\m\4\z\r\9\k\3\q\4\x\e\k\m\t\m\v\o\s\d\4\u\9\2\a\g\1\5\b\h\c\p\p\r\p\q\v\e\c\9\y\u\a\c\8\r\1\o\k\z\t\t\j\g\u\4\o\h\w\u\f\b\v\j\c\m\q\k\c\u\q\s\6\9\o\v\t\z\j\f\5\e\y\h\g\x\5\p\p\p\7\q\n\w\j\4\n\r\l\v\3\b\6\v\f\m\h\3\h\r\w\n\u\p\n\l\b\t\a\c\9\i\b\n\e\n\c\3\7\g\0\6\o\r\q\3\3\6\r\7\5\d\n\r\a\o\3\j\c\4\o\e\5\j\q\i\8\k\k\8\a\c\9\1\r\l\g\w\8\h\v\7\m\b\x\4\1\q\6\m\m\2\c\l\d\p\e\2\d\k\p\j\c\c\5\u\e\4\v\g\m\f\b\0\6\i\p\m\n\u\p\y\f\j\8\y\1\b\o\k\m\4\w\x\m\n\6\5\p\d\l\5\k\b\w\7\9\f\n\5\t\0\v\o\v\f\3\x\y\0\7\4\t\o\v\m\b\7\j\1\h\z\u\1\9\r\a\x\t\a\9\c\y\f\a\q\4\5\e\6\2\1\3\2\y\2\e\f\w\i\j\3\0\g\a\0\1\6\i\y\o\2\f\q\z\x\b\u\i\2\q\7\7\b\5\d\n\8\o\e\1\2\p\e\k\2\w\0\d\o\q\4\a\2\o\d\b\q\l\j\5\4\v\z\w\e\s\d\v\r\h\e\g\u\p\c\w\l\s\j\o\f\c\j\z\l\7\x\0\a\i\t\u\p\5\d\y\h\y\q\3\n\k\9\b\f\e\b\r\e\e\u\b\i\5\l\z\b\n\s\q\l\j\7\u\d\e\3\9\z\6\b\z\r\9\j\l\v\5\k\g\h\w\3\0\r\y\8\3\0\o\u\4\j\r\8\a\8\j\t\z\f\n\8\j\w\h ]] 00:28:06.361 13:13:25 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:06.361 13:13:25 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:06.361 13:13:25 -- dd/common.sh@98 -- # xtrace_disable 00:28:06.361 13:13:25 -- common/autotest_common.sh@10 -- # set +x 00:28:06.361 13:13:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:06.361 13:13:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:06.361 [2024-06-11 13:13:25.069542] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:06.361 [2024-06-11 13:13:25.069978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139166 ] 00:28:06.619 [2024-06-11 13:13:25.234128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.619 [2024-06-11 13:13:25.389138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.809  Copying: 512/512 [B] (average 500 kBps) 00:28:07.809 00:28:08.067 13:13:26 -- dd/posix.sh@93 -- # [[ aqgm3vzs87bkahh3jipgyhfehp3c5bues36tklw7efa7icmgf1fjftvyrt6f0dc8n69jnd60s7mv1l7gszdjb4pxa29sta2turr1okuledvsjghohro8rcwkc2q2lsfktrj1y5yt6y0ilibvm511buyc6qrq9joatlh8dkpdep72zkq16nc8mfd4npv0jtn1ppebfddcrfmvkdhe3a5v4for81t05nnv4pe1u84ubvh1b4qe8p4vrx70siy0oktu8xz8w15ir01euyakigojgd4kgg50bxcfp2dk6ir8wnlbdwvbn16in2d467c9j8ew71l8rx8pqbvy3w1jafau5giblg7s4nj75krzadq24ox7e8xibjubjr5f0hxjts2forkbya7ealiefcgawbjqa65b5fnvz4509wc0bye1nw57koylzjenw1twjg3fjhib82jg6pa64bjwii65d44jghjmyndlx7z59al5kull9h6dfg37q225nnqcjwwo3nta == \a\q\g\m\3\v\z\s\8\7\b\k\a\h\h\3\j\i\p\g\y\h\f\e\h\p\3\c\5\b\u\e\s\3\6\t\k\l\w\7\e\f\a\7\i\c\m\g\f\1\f\j\f\t\v\y\r\t\6\f\0\d\c\8\n\6\9\j\n\d\6\0\s\7\m\v\1\l\7\g\s\z\d\j\b\4\p\x\a\2\9\s\t\a\2\t\u\r\r\1\o\k\u\l\e\d\v\s\j\g\h\o\h\r\o\8\r\c\w\k\c\2\q\2\l\s\f\k\t\r\j\1\y\5\y\t\6\y\0\i\l\i\b\v\m\5\1\1\b\u\y\c\6\q\r\q\9\j\o\a\t\l\h\8\d\k\p\d\e\p\7\2\z\k\q\1\6\n\c\8\m\f\d\4\n\p\v\0\j\t\n\1\p\p\e\b\f\d\d\c\r\f\m\v\k\d\h\e\3\a\5\v\4\f\o\r\8\1\t\0\5\n\n\v\4\p\e\1\u\8\4\u\b\v\h\1\b\4\q\e\8\p\4\v\r\x\7\0\s\i\y\0\o\k\t\u\8\x\z\8\w\1\5\i\r\0\1\e\u\y\a\k\i\g\o\j\g\d\4\k\g\g\5\0\b\x\c\f\p\2\d\k\6\i\r\8\w\n\l\b\d\w\v\b\n\1\6\i\n\2\d\4\6\7\c\9\j\8\e\w\7\1\l\8\r\x\8\p\q\b\v\y\3\w\1\j\a\f\a\u\5\g\i\b\l\g\7\s\4\n\j\7\5\k\r\z\a\d\q\2\4\o\x\7\e\8\x\i\b\j\u\b\j\r\5\f\0\h\x\j\t\s\2\f\o\r\k\b\y\a\7\e\a\l\i\e\f\c\g\a\w\b\j\q\a\6\5\b\5\f\n\v\z\4\5\0\9\w\c\0\b\y\e\1\n\w\5\7\k\o\y\l\z\j\e\n\w\1\t\w\j\g\3\f\j\h\i\b\8\2\j\g\6\p\a\6\4\b\j\w\i\i\6\5\d\4\4\j\g\h\j\m\y\n\d\l\x\7\z\5\9\a\l\5\k\u\l\l\9\h\6\d\f\g\3\7\q\2\2\5\n\n\q\c\j\w\w\o\3\n\t\a ]] 00:28:08.067 13:13:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:08.067 13:13:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:08.067 [2024-06-11 13:13:26.704521] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:08.067 [2024-06-11 13:13:26.704821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139202 ] 00:28:08.067 [2024-06-11 13:13:26.862413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.326 [2024-06-11 13:13:27.093657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.544  Copying: 512/512 [B] (average 500 kBps) 00:28:09.544 00:28:09.544 13:13:28 -- dd/posix.sh@93 -- # [[ aqgm3vzs87bkahh3jipgyhfehp3c5bues36tklw7efa7icmgf1fjftvyrt6f0dc8n69jnd60s7mv1l7gszdjb4pxa29sta2turr1okuledvsjghohro8rcwkc2q2lsfktrj1y5yt6y0ilibvm511buyc6qrq9joatlh8dkpdep72zkq16nc8mfd4npv0jtn1ppebfddcrfmvkdhe3a5v4for81t05nnv4pe1u84ubvh1b4qe8p4vrx70siy0oktu8xz8w15ir01euyakigojgd4kgg50bxcfp2dk6ir8wnlbdwvbn16in2d467c9j8ew71l8rx8pqbvy3w1jafau5giblg7s4nj75krzadq24ox7e8xibjubjr5f0hxjts2forkbya7ealiefcgawbjqa65b5fnvz4509wc0bye1nw57koylzjenw1twjg3fjhib82jg6pa64bjwii65d44jghjmyndlx7z59al5kull9h6dfg37q225nnqcjwwo3nta == \a\q\g\m\3\v\z\s\8\7\b\k\a\h\h\3\j\i\p\g\y\h\f\e\h\p\3\c\5\b\u\e\s\3\6\t\k\l\w\7\e\f\a\7\i\c\m\g\f\1\f\j\f\t\v\y\r\t\6\f\0\d\c\8\n\6\9\j\n\d\6\0\s\7\m\v\1\l\7\g\s\z\d\j\b\4\p\x\a\2\9\s\t\a\2\t\u\r\r\1\o\k\u\l\e\d\v\s\j\g\h\o\h\r\o\8\r\c\w\k\c\2\q\2\l\s\f\k\t\r\j\1\y\5\y\t\6\y\0\i\l\i\b\v\m\5\1\1\b\u\y\c\6\q\r\q\9\j\o\a\t\l\h\8\d\k\p\d\e\p\7\2\z\k\q\1\6\n\c\8\m\f\d\4\n\p\v\0\j\t\n\1\p\p\e\b\f\d\d\c\r\f\m\v\k\d\h\e\3\a\5\v\4\f\o\r\8\1\t\0\5\n\n\v\4\p\e\1\u\8\4\u\b\v\h\1\b\4\q\e\8\p\4\v\r\x\7\0\s\i\y\0\o\k\t\u\8\x\z\8\w\1\5\i\r\0\1\e\u\y\a\k\i\g\o\j\g\d\4\k\g\g\5\0\b\x\c\f\p\2\d\k\6\i\r\8\w\n\l\b\d\w\v\b\n\1\6\i\n\2\d\4\6\7\c\9\j\8\e\w\7\1\l\8\r\x\8\p\q\b\v\y\3\w\1\j\a\f\a\u\5\g\i\b\l\g\7\s\4\n\j\7\5\k\r\z\a\d\q\2\4\o\x\7\e\8\x\i\b\j\u\b\j\r\5\f\0\h\x\j\t\s\2\f\o\r\k\b\y\a\7\e\a\l\i\e\f\c\g\a\w\b\j\q\a\6\5\b\5\f\n\v\z\4\5\0\9\w\c\0\b\y\e\1\n\w\5\7\k\o\y\l\z\j\e\n\w\1\t\w\j\g\3\f\j\h\i\b\8\2\j\g\6\p\a\6\4\b\j\w\i\i\6\5\d\4\4\j\g\h\j\m\y\n\d\l\x\7\z\5\9\a\l\5\k\u\l\l\9\h\6\d\f\g\3\7\q\2\2\5\n\n\q\c\j\w\w\o\3\n\t\a ]] 00:28:09.544 13:13:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:09.544 13:13:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:09.863 [2024-06-11 13:13:28.380975] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:09.863 [2024-06-11 13:13:28.381353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139230 ] 00:28:09.863 [2024-06-11 13:13:28.532373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.863 [2024-06-11 13:13:28.697405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.499  Copying: 512/512 [B] (average 125 kBps) 00:28:11.499 00:28:11.499 13:13:29 -- dd/posix.sh@93 -- # [[ aqgm3vzs87bkahh3jipgyhfehp3c5bues36tklw7efa7icmgf1fjftvyrt6f0dc8n69jnd60s7mv1l7gszdjb4pxa29sta2turr1okuledvsjghohro8rcwkc2q2lsfktrj1y5yt6y0ilibvm511buyc6qrq9joatlh8dkpdep72zkq16nc8mfd4npv0jtn1ppebfddcrfmvkdhe3a5v4for81t05nnv4pe1u84ubvh1b4qe8p4vrx70siy0oktu8xz8w15ir01euyakigojgd4kgg50bxcfp2dk6ir8wnlbdwvbn16in2d467c9j8ew71l8rx8pqbvy3w1jafau5giblg7s4nj75krzadq24ox7e8xibjubjr5f0hxjts2forkbya7ealiefcgawbjqa65b5fnvz4509wc0bye1nw57koylzjenw1twjg3fjhib82jg6pa64bjwii65d44jghjmyndlx7z59al5kull9h6dfg37q225nnqcjwwo3nta == \a\q\g\m\3\v\z\s\8\7\b\k\a\h\h\3\j\i\p\g\y\h\f\e\h\p\3\c\5\b\u\e\s\3\6\t\k\l\w\7\e\f\a\7\i\c\m\g\f\1\f\j\f\t\v\y\r\t\6\f\0\d\c\8\n\6\9\j\n\d\6\0\s\7\m\v\1\l\7\g\s\z\d\j\b\4\p\x\a\2\9\s\t\a\2\t\u\r\r\1\o\k\u\l\e\d\v\s\j\g\h\o\h\r\o\8\r\c\w\k\c\2\q\2\l\s\f\k\t\r\j\1\y\5\y\t\6\y\0\i\l\i\b\v\m\5\1\1\b\u\y\c\6\q\r\q\9\j\o\a\t\l\h\8\d\k\p\d\e\p\7\2\z\k\q\1\6\n\c\8\m\f\d\4\n\p\v\0\j\t\n\1\p\p\e\b\f\d\d\c\r\f\m\v\k\d\h\e\3\a\5\v\4\f\o\r\8\1\t\0\5\n\n\v\4\p\e\1\u\8\4\u\b\v\h\1\b\4\q\e\8\p\4\v\r\x\7\0\s\i\y\0\o\k\t\u\8\x\z\8\w\1\5\i\r\0\1\e\u\y\a\k\i\g\o\j\g\d\4\k\g\g\5\0\b\x\c\f\p\2\d\k\6\i\r\8\w\n\l\b\d\w\v\b\n\1\6\i\n\2\d\4\6\7\c\9\j\8\e\w\7\1\l\8\r\x\8\p\q\b\v\y\3\w\1\j\a\f\a\u\5\g\i\b\l\g\7\s\4\n\j\7\5\k\r\z\a\d\q\2\4\o\x\7\e\8\x\i\b\j\u\b\j\r\5\f\0\h\x\j\t\s\2\f\o\r\k\b\y\a\7\e\a\l\i\e\f\c\g\a\w\b\j\q\a\6\5\b\5\f\n\v\z\4\5\0\9\w\c\0\b\y\e\1\n\w\5\7\k\o\y\l\z\j\e\n\w\1\t\w\j\g\3\f\j\h\i\b\8\2\j\g\6\p\a\6\4\b\j\w\i\i\6\5\d\4\4\j\g\h\j\m\y\n\d\l\x\7\z\5\9\a\l\5\k\u\l\l\9\h\6\d\f\g\3\7\q\2\2\5\n\n\q\c\j\w\w\o\3\n\t\a ]] 00:28:11.499 13:13:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:11.499 13:13:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:11.499 [2024-06-11 13:13:29.995490] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:11.499 [2024-06-11 13:13:29.995906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139255 ] 00:28:11.499 [2024-06-11 13:13:30.162935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.499 [2024-06-11 13:13:30.331551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.136  Copying: 512/512 [B] (average 250 kBps) 00:28:13.136 00:28:13.136 ************************************ 00:28:13.136 END TEST dd_flags_misc_forced_aio 00:28:13.136 ************************************ 00:28:13.136 13:13:31 -- dd/posix.sh@93 -- # [[ aqgm3vzs87bkahh3jipgyhfehp3c5bues36tklw7efa7icmgf1fjftvyrt6f0dc8n69jnd60s7mv1l7gszdjb4pxa29sta2turr1okuledvsjghohro8rcwkc2q2lsfktrj1y5yt6y0ilibvm511buyc6qrq9joatlh8dkpdep72zkq16nc8mfd4npv0jtn1ppebfddcrfmvkdhe3a5v4for81t05nnv4pe1u84ubvh1b4qe8p4vrx70siy0oktu8xz8w15ir01euyakigojgd4kgg50bxcfp2dk6ir8wnlbdwvbn16in2d467c9j8ew71l8rx8pqbvy3w1jafau5giblg7s4nj75krzadq24ox7e8xibjubjr5f0hxjts2forkbya7ealiefcgawbjqa65b5fnvz4509wc0bye1nw57koylzjenw1twjg3fjhib82jg6pa64bjwii65d44jghjmyndlx7z59al5kull9h6dfg37q225nnqcjwwo3nta == \a\q\g\m\3\v\z\s\8\7\b\k\a\h\h\3\j\i\p\g\y\h\f\e\h\p\3\c\5\b\u\e\s\3\6\t\k\l\w\7\e\f\a\7\i\c\m\g\f\1\f\j\f\t\v\y\r\t\6\f\0\d\c\8\n\6\9\j\n\d\6\0\s\7\m\v\1\l\7\g\s\z\d\j\b\4\p\x\a\2\9\s\t\a\2\t\u\r\r\1\o\k\u\l\e\d\v\s\j\g\h\o\h\r\o\8\r\c\w\k\c\2\q\2\l\s\f\k\t\r\j\1\y\5\y\t\6\y\0\i\l\i\b\v\m\5\1\1\b\u\y\c\6\q\r\q\9\j\o\a\t\l\h\8\d\k\p\d\e\p\7\2\z\k\q\1\6\n\c\8\m\f\d\4\n\p\v\0\j\t\n\1\p\p\e\b\f\d\d\c\r\f\m\v\k\d\h\e\3\a\5\v\4\f\o\r\8\1\t\0\5\n\n\v\4\p\e\1\u\8\4\u\b\v\h\1\b\4\q\e\8\p\4\v\r\x\7\0\s\i\y\0\o\k\t\u\8\x\z\8\w\1\5\i\r\0\1\e\u\y\a\k\i\g\o\j\g\d\4\k\g\g\5\0\b\x\c\f\p\2\d\k\6\i\r\8\w\n\l\b\d\w\v\b\n\1\6\i\n\2\d\4\6\7\c\9\j\8\e\w\7\1\l\8\r\x\8\p\q\b\v\y\3\w\1\j\a\f\a\u\5\g\i\b\l\g\7\s\4\n\j\7\5\k\r\z\a\d\q\2\4\o\x\7\e\8\x\i\b\j\u\b\j\r\5\f\0\h\x\j\t\s\2\f\o\r\k\b\y\a\7\e\a\l\i\e\f\c\g\a\w\b\j\q\a\6\5\b\5\f\n\v\z\4\5\0\9\w\c\0\b\y\e\1\n\w\5\7\k\o\y\l\z\j\e\n\w\1\t\w\j\g\3\f\j\h\i\b\8\2\j\g\6\p\a\6\4\b\j\w\i\i\6\5\d\4\4\j\g\h\j\m\y\n\d\l\x\7\z\5\9\a\l\5\k\u\l\l\9\h\6\d\f\g\3\7\q\2\2\5\n\n\q\c\j\w\w\o\3\n\t\a ]] 00:28:13.136 00:28:13.136 real 0m13.093s 00:28:13.136 user 0m10.274s 00:28:13.136 sys 0m1.702s 00:28:13.136 13:13:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:13.136 13:13:31 -- common/autotest_common.sh@10 -- # set +x 00:28:13.136 13:13:31 -- dd/posix.sh@1 -- # cleanup 00:28:13.136 13:13:31 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:13.136 13:13:31 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:13.136 00:28:13.136 real 0m54.940s 00:28:13.136 user 0m41.472s 00:28:13.136 sys 0m7.272s 00:28:13.136 13:13:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:13.136 13:13:31 -- common/autotest_common.sh@10 -- # set +x 00:28:13.136 ************************************ 00:28:13.136 END TEST spdk_dd_posix 00:28:13.136 ************************************ 00:28:13.136 13:13:31 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:13.136 13:13:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:13.136 13:13:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:13.136 13:13:31 -- 
common/autotest_common.sh@10 -- # set +x 00:28:13.136 ************************************ 00:28:13.136 START TEST spdk_dd_malloc 00:28:13.136 ************************************ 00:28:13.136 13:13:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:13.136 * Looking for test storage... 00:28:13.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:13.136 13:13:31 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:13.136 13:13:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.136 13:13:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.136 13:13:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.136 13:13:31 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:13.136 13:13:31 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:13.136 13:13:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:13.136 13:13:31 -- paths/export.sh@5 -- # export PATH 00:28:13.137 13:13:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:13.137 13:13:31 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:28:13.137 13:13:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:13.137 13:13:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:13.137 13:13:31 -- common/autotest_common.sh@10 -- # set +x 00:28:13.137 ************************************ 00:28:13.137 START TEST dd_malloc_copy 00:28:13.137 ************************************ 00:28:13.137 13:13:31 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:28:13.137 13:13:31 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:28:13.137 13:13:31 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:28:13.137 13:13:31 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:28:13.137 13:13:31 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:28:13.137 13:13:31 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:28:13.137 13:13:31 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:28:13.137 13:13:31 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:28:13.137 13:13:31 -- dd/malloc.sh@28 -- # gen_conf 00:28:13.137 13:13:31 -- dd/common.sh@31 -- # xtrace_disable 00:28:13.137 13:13:31 -- common/autotest_common.sh@10 -- # set +x 00:28:13.137 [2024-06-11 13:13:31.816665] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:13.137 [2024-06-11 13:13:31.817020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139339 ] 00:28:13.137 { 00:28:13.137 "subsystems": [ 00:28:13.137 { 00:28:13.137 "subsystem": "bdev", 00:28:13.137 "config": [ 00:28:13.137 { 00:28:13.137 "params": { 00:28:13.137 "num_blocks": 1048576, 00:28:13.137 "block_size": 512, 00:28:13.137 "name": "malloc0" 00:28:13.137 }, 00:28:13.137 "method": "bdev_malloc_create" 00:28:13.137 }, 00:28:13.137 { 00:28:13.137 "params": { 00:28:13.137 "num_blocks": 1048576, 00:28:13.137 "block_size": 512, 00:28:13.137 "name": "malloc1" 00:28:13.137 }, 00:28:13.137 "method": "bdev_malloc_create" 00:28:13.137 }, 00:28:13.137 { 00:28:13.137 "method": "bdev_wait_for_examine" 00:28:13.137 } 00:28:13.137 ] 00:28:13.137 } 00:28:13.137 ] 00:28:13.137 } 00:28:13.396 [2024-06-11 13:13:31.983909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.396 [2024-06-11 13:13:32.162962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.348  Copying: 214/512 [MB] (214 MBps) Copying: 430/512 [MB] (216 MBps) Copying: 512/512 [MB] (average 216 MBps) 00:28:20.348 00:28:20.348 13:13:38 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:28:20.348 13:13:38 -- dd/malloc.sh@33 -- # gen_conf 00:28:20.348 13:13:38 -- dd/common.sh@31 -- # xtrace_disable 00:28:20.348 13:13:38 -- common/autotest_common.sh@10 -- # set +x 00:28:20.348 [2024-06-11 13:13:38.806880] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:20.348 [2024-06-11 13:13:38.807854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139448 ] 00:28:20.348 { 00:28:20.348 "subsystems": [ 00:28:20.348 { 00:28:20.348 "subsystem": "bdev", 00:28:20.348 "config": [ 00:28:20.348 { 00:28:20.348 "params": { 00:28:20.348 "num_blocks": 1048576, 00:28:20.348 "block_size": 512, 00:28:20.348 "name": "malloc0" 00:28:20.348 }, 00:28:20.348 "method": "bdev_malloc_create" 00:28:20.348 }, 00:28:20.348 { 00:28:20.348 "params": { 00:28:20.348 "num_blocks": 1048576, 00:28:20.348 "block_size": 512, 00:28:20.348 "name": "malloc1" 00:28:20.348 }, 00:28:20.348 "method": "bdev_malloc_create" 00:28:20.348 }, 00:28:20.348 { 00:28:20.348 "method": "bdev_wait_for_examine" 00:28:20.348 } 00:28:20.348 ] 00:28:20.348 } 00:28:20.348 ] 00:28:20.348 } 00:28:20.348 [2024-06-11 13:13:38.975045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.348 [2024-06-11 13:13:39.144636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.370  Copying: 213/512 [MB] (213 MBps) Copying: 429/512 [MB] (216 MBps) Copying: 512/512 [MB] (average 214 MBps) 00:28:27.370 00:28:27.370 ************************************ 00:28:27.370 END TEST dd_malloc_copy 00:28:27.370 ************************************ 00:28:27.370 00:28:27.370 real 0m13.996s 00:28:27.370 user 0m12.793s 00:28:27.370 sys 0m1.065s 00:28:27.370 13:13:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.370 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:28:27.370 ************************************ 00:28:27.370 END TEST spdk_dd_malloc 00:28:27.370 ************************************ 00:28:27.370 00:28:27.370 real 0m14.121s 00:28:27.370 user 0m12.872s 00:28:27.370 sys 0m1.112s 00:28:27.370 13:13:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.370 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:28:27.370 13:13:45 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:27.370 13:13:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:27.370 13:13:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:27.370 13:13:45 -- common/autotest_common.sh@10 -- # set +x 00:28:27.370 ************************************ 00:28:27.370 START TEST spdk_dd_bdev_to_bdev 00:28:27.370 ************************************ 00:28:27.370 13:13:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:27.370 * Looking for test storage... 
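The dd_malloc_copy run above amounts to handing spdk_dd a bdev config on a spare file descriptor and copying one malloc bdev into the other and back. A minimal stand-alone sketch of that invocation, reusing the block counts and bdev names from the logged JSON; the here-string wiring of fd 62 is this sketch's own choice, not the harness code:

#!/usr/bin/env bash
# Copy a 512 MiB malloc bdev into a second one with spdk_dd, mirroring the
# dd_malloc_copy flow logged above. Assumes the SPDK tree at
# /home/vagrant/spdk_repo/spdk has been built.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# Two malloc bdevs of 1048576 blocks x 512 bytes, as in the logged config.
conf='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "num_blocks": 1048576, "block_size": 512, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "num_blocks": 1048576, "block_size": 512, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}'

# malloc0 -> malloc1, then back again, passing the config on fd 62.
"$DD" --ib=malloc0 --ob=malloc1 --json /dev/fd/62 62<<< "$conf"
"$DD" --ib=malloc1 --ob=malloc0 --json /dev/fd/62 62<<< "$conf"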
00:28:27.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:27.370 13:13:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:27.370 13:13:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.370 13:13:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.370 13:13:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.370 13:13:45 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:27.370 13:13:45 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:27.370 13:13:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:27.370 13:13:45 -- paths/export.sh@5 -- # export PATH 00:28:27.370 13:13:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:27.370 13:13:45 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:28:27.370 13:13:45 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:28:27.370 13:13:45 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:28:27.370 13:13:45 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:28:27.370 13:13:45 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:28:27.370 13:13:45 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:28:27.370 13:13:45 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:28:27.370 13:13:45 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:28:27.370 13:13:45 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:28:27.371 13:13:45 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:28:27.371 13:13:45 -- 
dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:28:27.371 13:13:45 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:28:27.371 13:13:45 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:28:27.371 13:13:45 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:28:27.371 [2024-06-11 13:13:45.962965] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:27.371 [2024-06-11 13:13:45.963683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139589 ] 00:28:27.371 [2024-06-11 13:13:46.117351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.629 [2024-06-11 13:13:46.292093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.267  Copying: 256/256 [MB] (average 1454 MBps) 00:28:29.267 00:28:29.267 13:13:47 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:29.267 13:13:47 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:29.267 13:13:47 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:28:29.267 13:13:47 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:28:29.267 13:13:47 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:28:29.267 13:13:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:28:29.267 13:13:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:29.267 13:13:47 -- common/autotest_common.sh@10 -- # set +x 00:28:29.267 ************************************ 00:28:29.267 START TEST dd_inflate_file 00:28:29.267 ************************************ 00:28:29.267 13:13:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:28:29.267 [2024-06-11 13:13:47.808787] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:29.267 [2024-06-11 13:13:47.808982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139643 ] 00:28:29.267 [2024-06-11 13:13:47.969524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.525 [2024-06-11 13:13:48.142291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.722  Copying: 64/64 [MB] (average 1454 MBps) 00:28:30.722 00:28:30.722 00:28:30.722 real 0m1.708s 00:28:30.722 user 0m1.276s 00:28:30.722 sys 0m0.285s 00:28:30.722 ************************************ 00:28:30.722 END TEST dd_inflate_file 00:28:30.722 ************************************ 00:28:30.722 13:13:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.722 13:13:49 -- common/autotest_common.sh@10 -- # set +x 00:28:30.722 13:13:49 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:28:30.722 13:13:49 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:28:30.722 13:13:49 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:28:30.722 13:13:49 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:28:30.722 13:13:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:30.722 13:13:49 -- common/autotest_common.sh@10 -- # set +x 00:28:30.722 13:13:49 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:28:30.722 13:13:49 -- dd/common.sh@31 -- # xtrace_disable 00:28:30.722 13:13:49 -- common/autotest_common.sh@10 -- # set +x 00:28:30.722 ************************************ 00:28:30.722 START TEST dd_copy_to_out_bdev 00:28:30.722 ************************************ 00:28:30.722 13:13:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:28:30.980 { 00:28:30.980 "subsystems": [ 00:28:30.980 { 00:28:30.980 "subsystem": "bdev", 00:28:30.980 "config": [ 00:28:30.980 { 00:28:30.980 "params": { 00:28:30.980 "block_size": 4096, 00:28:30.980 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:30.980 "name": "aio1" 00:28:30.980 }, 00:28:30.980 "method": "bdev_aio_create" 00:28:30.980 }, 00:28:30.980 { 00:28:30.980 "params": { 00:28:30.980 "trtype": "pcie", 00:28:30.980 "traddr": "0000:00:06.0", 00:28:30.980 "name": "Nvme0" 00:28:30.980 }, 00:28:30.980 "method": "bdev_nvme_attach_controller" 00:28:30.980 }, 00:28:30.980 { 00:28:30.980 "method": "bdev_wait_for_examine" 00:28:30.980 } 00:28:30.980 ] 00:28:30.980 } 00:28:30.980 ] 00:28:30.980 } 00:28:30.980 [2024-06-11 13:13:49.580782] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:30.980 [2024-06-11 13:13:49.580974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139696 ] 00:28:30.980 [2024-06-11 13:13:49.748972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.239 [2024-06-11 13:13:49.946590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.117  Copying: 40/64 [MB] (40 MBps) Copying: 64/64 [MB] (average 40 MBps) 00:28:34.117 00:28:34.117 00:28:34.117 real 0m3.412s 00:28:34.117 user 0m3.011s 00:28:34.117 sys 0m0.310s 00:28:34.117 ************************************ 00:28:34.117 END TEST dd_copy_to_out_bdev 00:28:34.117 ************************************ 00:28:34.117 13:13:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:34.117 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:28:34.376 13:13:52 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:28:34.376 13:13:52 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:28:34.376 13:13:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:34.376 13:13:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:34.376 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:28:34.376 ************************************ 00:28:34.376 START TEST dd_offset_magic 00:28:34.376 ************************************ 00:28:34.376 13:13:52 -- common/autotest_common.sh@1104 -- # offset_magic 00:28:34.376 13:13:52 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:28:34.376 13:13:52 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:28:34.376 13:13:52 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:28:34.376 13:13:52 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:28:34.376 13:13:52 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:28:34.376 13:13:52 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:28:34.376 13:13:52 -- dd/common.sh@31 -- # xtrace_disable 00:28:34.376 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:28:34.376 [2024-06-11 13:13:53.042169] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:34.376 [2024-06-11 13:13:53.042563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139760 ] 00:28:34.376 { 00:28:34.376 "subsystems": [ 00:28:34.376 { 00:28:34.376 "subsystem": "bdev", 00:28:34.376 "config": [ 00:28:34.376 { 00:28:34.376 "params": { 00:28:34.376 "block_size": 4096, 00:28:34.376 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:34.376 "name": "aio1" 00:28:34.376 }, 00:28:34.376 "method": "bdev_aio_create" 00:28:34.376 }, 00:28:34.376 { 00:28:34.376 "params": { 00:28:34.376 "trtype": "pcie", 00:28:34.376 "traddr": "0000:00:06.0", 00:28:34.376 "name": "Nvme0" 00:28:34.376 }, 00:28:34.376 "method": "bdev_nvme_attach_controller" 00:28:34.376 }, 00:28:34.376 { 00:28:34.376 "method": "bdev_wait_for_examine" 00:28:34.376 } 00:28:34.376 ] 00:28:34.376 } 00:28:34.376 ] 00:28:34.376 } 00:28:34.376 [2024-06-11 13:13:53.212310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.635 [2024-06-11 13:13:53.391740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.165  Copying: 65/65 [MB] (average 250 MBps) 00:28:36.165 00:28:36.165 13:13:54 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:28:36.165 13:13:54 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:28:36.165 13:13:54 -- dd/common.sh@31 -- # xtrace_disable 00:28:36.165 13:13:54 -- common/autotest_common.sh@10 -- # set +x 00:28:36.165 [2024-06-11 13:13:54.997814] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:36.165 [2024-06-11 13:13:54.998199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139793 ] 00:28:36.165 { 00:28:36.165 "subsystems": [ 00:28:36.165 { 00:28:36.165 "subsystem": "bdev", 00:28:36.165 "config": [ 00:28:36.165 { 00:28:36.165 "params": { 00:28:36.165 "block_size": 4096, 00:28:36.165 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:36.165 "name": "aio1" 00:28:36.165 }, 00:28:36.165 "method": "bdev_aio_create" 00:28:36.165 }, 00:28:36.165 { 00:28:36.165 "params": { 00:28:36.165 "trtype": "pcie", 00:28:36.165 "traddr": "0000:00:06.0", 00:28:36.165 "name": "Nvme0" 00:28:36.165 }, 00:28:36.165 "method": "bdev_nvme_attach_controller" 00:28:36.165 }, 00:28:36.165 { 00:28:36.165 "method": "bdev_wait_for_examine" 00:28:36.165 } 00:28:36.165 ] 00:28:36.165 } 00:28:36.165 ] 00:28:36.165 } 00:28:36.424 [2024-06-11 13:13:55.167491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.683 [2024-06-11 13:13:55.338182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.878  Copying: 1024/1024 [kB] (average 500 MBps) 00:28:37.878 00:28:38.137 13:13:56 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:28:38.137 13:13:56 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:28:38.137 13:13:56 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:28:38.137 13:13:56 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:28:38.137 13:13:56 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:28:38.137 13:13:56 -- dd/common.sh@31 -- # xtrace_disable 00:28:38.137 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:28:38.137 [2024-06-11 13:13:56.781121] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:38.137 [2024-06-11 13:13:56.781519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139827 ] 00:28:38.137 { 00:28:38.137 "subsystems": [ 00:28:38.137 { 00:28:38.137 "subsystem": "bdev", 00:28:38.137 "config": [ 00:28:38.137 { 00:28:38.137 "params": { 00:28:38.137 "block_size": 4096, 00:28:38.137 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:38.137 "name": "aio1" 00:28:38.137 }, 00:28:38.137 "method": "bdev_aio_create" 00:28:38.137 }, 00:28:38.137 { 00:28:38.137 "params": { 00:28:38.137 "trtype": "pcie", 00:28:38.137 "traddr": "0000:00:06.0", 00:28:38.137 "name": "Nvme0" 00:28:38.137 }, 00:28:38.137 "method": "bdev_nvme_attach_controller" 00:28:38.137 }, 00:28:38.137 { 00:28:38.137 "method": "bdev_wait_for_examine" 00:28:38.137 } 00:28:38.137 ] 00:28:38.137 } 00:28:38.137 ] 00:28:38.137 } 00:28:38.137 [2024-06-11 13:13:56.948007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.396 [2024-06-11 13:13:57.128398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.901  Copying: 65/65 [MB] (average 317 MBps) 00:28:39.901 00:28:39.901 13:13:58 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:28:39.901 13:13:58 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:28:39.901 13:13:58 -- dd/common.sh@31 -- # xtrace_disable 00:28:39.901 13:13:58 -- common/autotest_common.sh@10 -- # set +x 00:28:39.901 [2024-06-11 13:13:58.686968] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:39.901 [2024-06-11 13:13:58.687319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139872 ] 00:28:39.901 { 00:28:39.901 "subsystems": [ 00:28:39.901 { 00:28:39.901 "subsystem": "bdev", 00:28:39.901 "config": [ 00:28:39.901 { 00:28:39.901 "params": { 00:28:39.901 "block_size": 4096, 00:28:39.901 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:39.901 "name": "aio1" 00:28:39.901 }, 00:28:39.901 "method": "bdev_aio_create" 00:28:39.901 }, 00:28:39.901 { 00:28:39.901 "params": { 00:28:39.901 "trtype": "pcie", 00:28:39.901 "traddr": "0000:00:06.0", 00:28:39.901 "name": "Nvme0" 00:28:39.901 }, 00:28:39.901 "method": "bdev_nvme_attach_controller" 00:28:39.901 }, 00:28:39.901 { 00:28:39.901 "method": "bdev_wait_for_examine" 00:28:39.901 } 00:28:39.901 ] 00:28:39.901 } 00:28:39.901 ] 00:28:39.901 } 00:28:40.161 [2024-06-11 13:13:58.853182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.420 [2024-06-11 13:13:59.029995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.613  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:41.613 00:28:41.613 ************************************ 00:28:41.613 END TEST dd_offset_magic 00:28:41.613 ************************************ 00:28:41.613 13:14:00 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:28:41.613 13:14:00 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:28:41.613 00:28:41.613 real 0m7.452s 00:28:41.613 user 0m5.667s 00:28:41.613 sys 0m1.014s 00:28:41.613 13:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:41.613 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:28:41.872 13:14:00 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:28:41.872 13:14:00 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:28:41.872 13:14:00 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:41.872 13:14:00 -- dd/common.sh@11 -- # local nvme_ref= 00:28:41.872 13:14:00 -- dd/common.sh@12 -- # local size=4194330 00:28:41.872 13:14:00 -- dd/common.sh@14 -- # local bs=1048576 00:28:41.872 13:14:00 -- dd/common.sh@15 -- # local count=5 00:28:41.872 13:14:00 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:28:41.872 13:14:00 -- dd/common.sh@18 -- # gen_conf 00:28:41.872 13:14:00 -- dd/common.sh@31 -- # xtrace_disable 00:28:41.872 13:14:00 -- common/autotest_common.sh@10 -- # set +x 00:28:41.872 [2024-06-11 13:14:00.529843] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:41.872 [2024-06-11 13:14:00.530226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139921 ] 00:28:41.872 { 00:28:41.872 "subsystems": [ 00:28:41.872 { 00:28:41.872 "subsystem": "bdev", 00:28:41.872 "config": [ 00:28:41.872 { 00:28:41.872 "params": { 00:28:41.872 "block_size": 4096, 00:28:41.872 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:41.872 "name": "aio1" 00:28:41.872 }, 00:28:41.873 "method": "bdev_aio_create" 00:28:41.873 }, 00:28:41.873 { 00:28:41.873 "params": { 00:28:41.873 "trtype": "pcie", 00:28:41.873 "traddr": "0000:00:06.0", 00:28:41.873 "name": "Nvme0" 00:28:41.873 }, 00:28:41.873 "method": "bdev_nvme_attach_controller" 00:28:41.873 }, 00:28:41.873 { 00:28:41.873 "method": "bdev_wait_for_examine" 00:28:41.873 } 00:28:41.873 ] 00:28:41.873 } 00:28:41.873 ] 00:28:41.873 } 00:28:41.873 [2024-06-11 13:14:00.696225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.131 [2024-06-11 13:14:00.877977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.634  Copying: 5120/5120 [kB] (average 1000 MBps) 00:28:43.634 00:28:43.634 13:14:02 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:28:43.634 13:14:02 -- dd/common.sh@10 -- # local bdev=aio1 00:28:43.634 13:14:02 -- dd/common.sh@11 -- # local nvme_ref= 00:28:43.634 13:14:02 -- dd/common.sh@12 -- # local size=4194330 00:28:43.634 13:14:02 -- dd/common.sh@14 -- # local bs=1048576 00:28:43.634 13:14:02 -- dd/common.sh@15 -- # local count=5 00:28:43.635 13:14:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:28:43.635 13:14:02 -- dd/common.sh@18 -- # gen_conf 00:28:43.635 13:14:02 -- dd/common.sh@31 -- # xtrace_disable 00:28:43.635 13:14:02 -- common/autotest_common.sh@10 -- # set +x 00:28:43.635 { 00:28:43.635 "subsystems": [ 00:28:43.635 { 00:28:43.635 "subsystem": "bdev", 00:28:43.635 "config": [ 00:28:43.635 { 00:28:43.635 "params": { 00:28:43.635 "block_size": 4096, 00:28:43.635 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:43.635 "name": "aio1" 00:28:43.635 }, 00:28:43.635 "method": "bdev_aio_create" 00:28:43.635 }, 00:28:43.635 { 00:28:43.635 "params": { 00:28:43.635 "trtype": "pcie", 00:28:43.635 "traddr": "0000:00:06.0", 00:28:43.635 "name": "Nvme0" 00:28:43.635 }, 00:28:43.635 "method": "bdev_nvme_attach_controller" 00:28:43.635 }, 00:28:43.635 { 00:28:43.635 "method": "bdev_wait_for_examine" 00:28:43.635 } 00:28:43.635 ] 00:28:43.635 } 00:28:43.635 ] 00:28:43.635 } 00:28:43.635 [2024-06-11 13:14:02.210435] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:43.635 [2024-06-11 13:14:02.210976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139943 ] 00:28:43.635 [2024-06-11 13:14:02.379521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.893 [2024-06-11 13:14:02.552002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.526  Copying: 5120/5120 [kB] (average 263 MBps) 00:28:45.526 00:28:45.526 13:14:03 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:28:45.526 ************************************ 00:28:45.526 END TEST spdk_dd_bdev_to_bdev 00:28:45.526 ************************************ 00:28:45.526 00:28:45.526 real 0m18.185s 00:28:45.527 user 0m14.147s 00:28:45.527 sys 0m2.648s 00:28:45.527 13:14:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:45.527 13:14:04 -- common/autotest_common.sh@10 -- # set +x 00:28:45.527 13:14:04 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:28:45.527 13:14:04 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:28:45.527 13:14:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:45.527 13:14:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:45.527 13:14:04 -- common/autotest_common.sh@10 -- # set +x 00:28:45.527 ************************************ 00:28:45.527 START TEST spdk_dd_sparse 00:28:45.527 ************************************ 00:28:45.527 13:14:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:28:45.527 * Looking for test storage... 
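The offset-magic check that closes the bdev_to_bdev suite writes 65 MiB from Nvme0n1 into an aio-backed file at a 16 MiB offset, reads one block back at the same offset, and looks for the magic string. A rough stand-alone sketch, assuming the tree is built, an NVMe controller sits at 0000:00:06.0, and the magic line was already copied to the start of Nvme0n1 by the earlier copy_to_out_bdev step:

#!/usr/bin/env bash
# Round-trip the magic string through an offset, as in the offset_magic test
# above: Nvme0n1 -> aio file at a 16 MiB offset, then 1 MiB back out at the
# same offset.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
AIO=/home/vagrant/spdk_repo/spdk/test/dd/aio1
DUMP=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

conf='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 4096, "filename": "'"$AIO"'", "name": "aio1" },
          "method": "bdev_aio_create" },
        { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}'

# Back the aio bdev with a 256 MiB file, then do the two copies.
dd if=/dev/zero of="$AIO" bs=1M count=256 status=none
"$DD" --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 62<<< "$conf"
"$DD" --ib=aio1 --of="$DUMP" --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 62<<< "$conf"

# Same 26-byte check the harness performs on dd.dump1.
read -rn26 magic_check < "$DUMP"
[[ $magic_check == 'This Is Our Magic, find it' ]] && echo 'magic found at 16 MiB'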
00:28:45.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:45.527 13:14:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:45.527 13:14:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.527 13:14:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.527 13:14:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.527 13:14:04 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:45.527 13:14:04 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:45.527 13:14:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:45.527 13:14:04 -- paths/export.sh@5 -- # export PATH 00:28:45.527 13:14:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:45.527 13:14:04 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:28:45.527 13:14:04 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:28:45.527 13:14:04 -- dd/sparse.sh@110 -- # file1=file_zero1 00:28:45.527 13:14:04 -- dd/sparse.sh@111 -- # file2=file_zero2 00:28:45.527 13:14:04 -- dd/sparse.sh@112 -- # file3=file_zero3 00:28:45.527 13:14:04 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:28:45.527 13:14:04 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:28:45.527 13:14:04 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:28:45.527 13:14:04 -- dd/sparse.sh@118 -- # prepare 00:28:45.527 13:14:04 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:28:45.527 13:14:04 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:28:45.527 1+0 records in 00:28:45.527 1+0 records 
out 00:28:45.527 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00823259 s, 509 MB/s 00:28:45.527 13:14:04 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:28:45.527 1+0 records in 00:28:45.527 1+0 records out 00:28:45.527 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.008547 s, 491 MB/s 00:28:45.527 13:14:04 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:28:45.527 1+0 records in 00:28:45.527 1+0 records out 00:28:45.527 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00814893 s, 515 MB/s 00:28:45.527 13:14:04 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:28:45.527 13:14:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:45.527 13:14:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:45.527 13:14:04 -- common/autotest_common.sh@10 -- # set +x 00:28:45.527 ************************************ 00:28:45.527 START TEST dd_sparse_file_to_file 00:28:45.527 ************************************ 00:28:45.527 13:14:04 -- common/autotest_common.sh@1104 -- # file_to_file 00:28:45.527 13:14:04 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:28:45.527 13:14:04 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:28:45.527 13:14:04 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:28:45.527 13:14:04 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:28:45.527 13:14:04 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:28:45.527 13:14:04 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:28:45.527 13:14:04 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:28:45.527 13:14:04 -- dd/sparse.sh@41 -- # gen_conf 00:28:45.527 13:14:04 -- dd/common.sh@31 -- # xtrace_disable 00:28:45.527 13:14:04 -- common/autotest_common.sh@10 -- # set +x 00:28:45.527 { 00:28:45.527 "subsystems": [ 00:28:45.527 { 00:28:45.527 "subsystem": "bdev", 00:28:45.527 "config": [ 00:28:45.527 { 00:28:45.527 "params": { 00:28:45.527 "block_size": 4096, 00:28:45.527 "filename": "dd_sparse_aio_disk", 00:28:45.527 "name": "dd_aio" 00:28:45.527 }, 00:28:45.527 "method": "bdev_aio_create" 00:28:45.527 }, 00:28:45.527 { 00:28:45.527 "params": { 00:28:45.527 "lvs_name": "dd_lvstore", 00:28:45.527 "bdev_name": "dd_aio" 00:28:45.527 }, 00:28:45.527 "method": "bdev_lvol_create_lvstore" 00:28:45.527 }, 00:28:45.527 { 00:28:45.527 "method": "bdev_wait_for_examine" 00:28:45.527 } 00:28:45.527 ] 00:28:45.527 } 00:28:45.527 ] 00:28:45.527 } 00:28:45.527 [2024-06-11 13:14:04.275625] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:45.527 [2024-06-11 13:14:04.276166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140031 ] 00:28:45.785 [2024-06-11 13:14:04.444351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.785 [2024-06-11 13:14:04.614202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.310  Copying: 12/36 [MB] (average 1200 MBps) 00:28:47.310 00:28:47.310 13:14:06 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:28:47.310 13:14:06 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:28:47.310 13:14:06 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:28:47.310 13:14:06 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:28:47.310 13:14:06 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:28:47.310 13:14:06 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:28:47.310 13:14:06 -- dd/sparse.sh@52 -- # stat1_b=24576 00:28:47.310 13:14:06 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:28:47.310 13:14:06 -- dd/sparse.sh@53 -- # stat2_b=24576 00:28:47.310 13:14:06 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:28:47.310 00:28:47.310 real 0m1.855s 00:28:47.310 user 0m1.474s 00:28:47.310 sys 0m0.252s 00:28:47.310 13:14:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.310 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:28:47.310 ************************************ 00:28:47.310 END TEST dd_sparse_file_to_file 00:28:47.310 ************************************ 00:28:47.310 13:14:06 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:28:47.310 13:14:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:47.310 13:14:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:47.310 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:28:47.310 ************************************ 00:28:47.310 START TEST dd_sparse_file_to_bdev 00:28:47.310 ************************************ 00:28:47.310 13:14:06 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:28:47.310 13:14:06 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:28:47.310 13:14:06 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:28:47.310 13:14:06 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:28:47.310 13:14:06 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:28:47.311 13:14:06 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:28:47.311 13:14:06 -- dd/sparse.sh@73 -- # gen_conf 00:28:47.311 13:14:06 -- dd/common.sh@31 -- # xtrace_disable 00:28:47.311 13:14:06 -- common/autotest_common.sh@10 -- # set +x 00:28:47.569 [2024-06-11 13:14:06.169019] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:47.569 [2024-06-11 13:14:06.169729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140091 ] 00:28:47.569 { 00:28:47.569 "subsystems": [ 00:28:47.569 { 00:28:47.569 "subsystem": "bdev", 00:28:47.569 "config": [ 00:28:47.569 { 00:28:47.569 "params": { 00:28:47.569 "block_size": 4096, 00:28:47.569 "filename": "dd_sparse_aio_disk", 00:28:47.569 "name": "dd_aio" 00:28:47.569 }, 00:28:47.569 "method": "bdev_aio_create" 00:28:47.569 }, 00:28:47.569 { 00:28:47.569 "params": { 00:28:47.569 "lvs_name": "dd_lvstore", 00:28:47.569 "thin_provision": true, 00:28:47.569 "lvol_name": "dd_lvol", 00:28:47.569 "size": 37748736 00:28:47.569 }, 00:28:47.569 "method": "bdev_lvol_create" 00:28:47.569 }, 00:28:47.569 { 00:28:47.569 "method": "bdev_wait_for_examine" 00:28:47.569 } 00:28:47.569 ] 00:28:47.569 } 00:28:47.569 ] 00:28:47.569 } 00:28:47.569 [2024-06-11 13:14:06.323962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.828 [2024-06-11 13:14:06.509016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.087 [2024-06-11 13:14:06.771966] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:28:48.087  Copying: 12/36 [MB] (average 521 MBps)[2024-06-11 13:14:06.828909] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:28:49.465 00:28:49.465 00:28:49.465 00:28:49.465 real 0m1.824s 00:28:49.465 user 0m1.487s 00:28:49.465 sys 0m0.234s 00:28:49.465 13:14:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.465 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:28:49.465 ************************************ 00:28:49.465 END TEST dd_sparse_file_to_bdev 00:28:49.465 ************************************ 00:28:49.465 13:14:07 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:28:49.465 13:14:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:49.465 13:14:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:49.465 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:28:49.465 ************************************ 00:28:49.465 START TEST dd_sparse_bdev_to_file 00:28:49.465 ************************************ 00:28:49.465 13:14:07 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:28:49.465 13:14:07 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:28:49.465 13:14:07 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:28:49.465 13:14:07 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:28:49.465 13:14:07 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:28:49.465 13:14:07 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:28:49.465 13:14:07 -- dd/sparse.sh@91 -- # gen_conf 00:28:49.465 13:14:07 -- dd/common.sh@31 -- # xtrace_disable 00:28:49.465 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:28:49.465 { 00:28:49.465 "subsystems": [ 00:28:49.465 { 00:28:49.466 "subsystem": "bdev", 00:28:49.466 "config": [ 00:28:49.466 { 00:28:49.466 "params": { 00:28:49.466 "block_size": 4096, 00:28:49.466 "filename": 
"dd_sparse_aio_disk", 00:28:49.466 "name": "dd_aio" 00:28:49.466 }, 00:28:49.466 "method": "bdev_aio_create" 00:28:49.466 }, 00:28:49.466 { 00:28:49.466 "method": "bdev_wait_for_examine" 00:28:49.466 } 00:28:49.466 ] 00:28:49.466 } 00:28:49.466 ] 00:28:49.466 } 00:28:49.466 [2024-06-11 13:14:08.044288] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:49.466 [2024-06-11 13:14:08.044481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140169 ] 00:28:49.466 [2024-06-11 13:14:08.209176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.724 [2024-06-11 13:14:08.375607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.917  Copying: 12/36 [MB] (average 1200 MBps) 00:28:50.917 00:28:50.917 13:14:09 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:28:50.917 13:14:09 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:28:50.917 13:14:09 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:28:50.917 13:14:09 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:28:50.917 13:14:09 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:28:50.917 13:14:09 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:28:50.917 13:14:09 -- dd/sparse.sh@102 -- # stat2_b=24576 00:28:50.917 13:14:09 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:28:50.917 13:14:09 -- dd/sparse.sh@103 -- # stat3_b=24576 00:28:50.917 13:14:09 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:28:50.917 00:28:50.917 real 0m1.769s 00:28:50.917 user 0m1.422s 00:28:50.917 sys 0m0.242s 00:28:50.917 13:14:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.917 ************************************ 00:28:50.917 END TEST dd_sparse_bdev_to_file 00:28:50.917 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:50.917 ************************************ 00:28:51.176 13:14:09 -- dd/sparse.sh@1 -- # cleanup 00:28:51.176 13:14:09 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:28:51.176 13:14:09 -- dd/sparse.sh@12 -- # rm file_zero1 00:28:51.176 13:14:09 -- dd/sparse.sh@13 -- # rm file_zero2 00:28:51.176 13:14:09 -- dd/sparse.sh@14 -- # rm file_zero3 00:28:51.176 00:28:51.176 real 0m5.737s 00:28:51.176 user 0m4.528s 00:28:51.176 sys 0m0.859s 00:28:51.176 13:14:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.176 ************************************ 00:28:51.176 END TEST spdk_dd_sparse 00:28:51.176 ************************************ 00:28:51.176 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 13:14:09 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:28:51.176 13:14:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:51.176 13:14:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:51.176 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 ************************************ 00:28:51.176 START TEST spdk_dd_negative 00:28:51.176 ************************************ 00:28:51.176 13:14:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:28:51.176 * Looking for test storage... 
00:28:51.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:51.176 13:14:09 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:51.176 13:14:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.176 13:14:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.176 13:14:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.176 13:14:09 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:51.176 13:14:09 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:51.176 13:14:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:51.176 13:14:09 -- paths/export.sh@5 -- # export PATH 00:28:51.176 13:14:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:51.176 13:14:09 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:51.176 13:14:09 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:51.176 13:14:09 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:51.176 13:14:09 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:51.176 13:14:09 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:28:51.176 13:14:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:51.176 13:14:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:51.176 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:28:51.176 ************************************ 00:28:51.176 
START TEST dd_invalid_arguments 00:28:51.176 ************************************ 00:28:51.176 13:14:09 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:28:51.176 13:14:09 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:28:51.176 13:14:09 -- common/autotest_common.sh@640 -- # local es=0 00:28:51.176 13:14:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:28:51.177 13:14:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.177 13:14:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.177 13:14:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.177 13:14:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.177 13:14:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.177 13:14:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.177 13:14:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.177 13:14:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:51.177 13:14:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:28:51.177 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:28:51.177 options: 00:28:51.177 -c, --config JSON config file (default none) 00:28:51.177 --json JSON config file (default none) 00:28:51.177 --json-ignore-init-errors 00:28:51.177 don't exit on invalid config entry 00:28:51.177 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:28:51.177 -g, --single-file-segments 00:28:51.177 force creating just one hugetlbfs file 00:28:51.177 -h, --help show this usage 00:28:51.177 -i, --shm-id shared memory ID (optional) 00:28:51.177 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:28:51.177 --lcores lcore to CPU mapping list. The list is in the format: 00:28:51.177 [<,lcores[@CPUs]>...] 00:28:51.177 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:28:51.177 Within the group, '-' is used for range separator, 00:28:51.177 ',' is used for single number separator. 00:28:51.177 '( )' can be omitted for single element group, 00:28:51.177 '@' can be omitted if cpus and lcores have the same value 00:28:51.177 -n, --mem-channels channel number of memory channels used for DPDK 00:28:51.177 -p, --main-core main (primary) core for DPDK 00:28:51.177 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:28:51.177 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:28:51.177 --disable-cpumask-locks Disable CPU core lock files. 
00:28:51.177 --silence-noticelog disable notice level logging to stderr 00:28:51.177 --msg-mempool-size global message memory pool size in count (default: 262143) 00:28:51.177 -u, --no-pci disable PCI access 00:28:51.177 --wait-for-rpc wait for RPCs to initialize subsystems 00:28:51.177 --max-delay maximum reactor delay (in microseconds) 00:28:51.177 -B, --pci-blocked pci addr to block (can be used more than once) 00:28:51.177 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:28:51.177 -R, --huge-unlink unlink huge files after initialization 00:28:51.177 -v, --version print SPDK version 00:28:51.177 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:28:51.177 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:28:51.177 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:28:51.177 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:28:51.177 Tracepoints vary in size and can use more than one trace entry. 00:28:51.177 --rpcs-allowed comma-separated list of permitted RPCS 00:28:51.177 --env-context Opaque context for use of the env implementation 00:28:51.177 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:28:51.177 --no-huge run without using hugepages 00:28:51.177 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:28:51.177 -e, --tpoint-group [:] 00:28:51.177 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:28:51.177 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:28:51.177 Groups and /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:28:51.177 [2024-06-11 13:14:10.017363] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:28:51.435 masks can be combined (e.g. thread,bdev:0x1). 00:28:51.435 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:28:51.435 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:28:51.435 [--------- DD Options ---------] 00:28:51.435 --if Input file. Must specify either --if or --ib. 00:28:51.435 --ib Input bdev. Must specifier either --if or --ib 00:28:51.435 --of Output file. Must specify either --of or --ob. 00:28:51.435 --ob Output bdev. Must specify either --of or --ob. 00:28:51.435 --iflag Input file flags. 00:28:51.435 --oflag Output file flags. 00:28:51.435 --bs I/O unit size (default: 4096) 00:28:51.435 --qd Queue depth (default: 2) 00:28:51.435 --count I/O unit count. The number of I/O units to copy. 
(default: all) 00:28:51.435 --skip Skip this many I/O units at start of input. (default: 0) 00:28:51.435 --seek Skip this many I/O units at start of output. (default: 0) 00:28:51.435 --aio Force usage of AIO. (by default io_uring is used if available) 00:28:51.435 --sparse Enable hole skipping in input target 00:28:51.435 Available iflag and oflag values: 00:28:51.435 append - append mode 00:28:51.435 direct - use direct I/O for data 00:28:51.435 directory - fail unless a directory 00:28:51.435 dsync - use synchronized I/O for data 00:28:51.435 noatime - do not update access time 00:28:51.435 noctty - do not assign controlling terminal from file 00:28:51.435 nofollow - do not follow symlinks 00:28:51.435 nonblock - use non-blocking I/O 00:28:51.435 sync - use synchronized I/O for data and metadata 00:28:51.435 13:14:10 -- common/autotest_common.sh@643 -- # es=2 00:28:51.435 13:14:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:51.435 13:14:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:51.435 13:14:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:51.435 00:28:51.435 real 0m0.106s 00:28:51.435 user 0m0.042s 00:28:51.435 sys 0m0.064s 00:28:51.435 13:14:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.435 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:51.435 ************************************ 00:28:51.435 END TEST dd_invalid_arguments 00:28:51.435 ************************************ 00:28:51.435 13:14:10 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:28:51.435 13:14:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:51.435 13:14:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:51.435 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:51.435 ************************************ 00:28:51.435 START TEST dd_double_input 00:28:51.435 ************************************ 00:28:51.435 13:14:10 -- common/autotest_common.sh@1104 -- # double_input 00:28:51.435 13:14:10 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:28:51.435 13:14:10 -- common/autotest_common.sh@640 -- # local es=0 00:28:51.435 13:14:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:28:51.435 13:14:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.435 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.435 13:14:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.435 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.435 13:14:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.435 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.435 13:14:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.435 13:14:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:51.435 13:14:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:28:51.435 [2024-06-11 13:14:10.169265] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
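Each negative case in this suite is driven through the NOT wrapper from autotest_common.sh, which runs the spdk_dd invocation, captures its exit status into es (visible in the trace above), and succeeds only when the command fails as expected. A simplified bash sketch of that pattern, not the actual autotest_common.sh implementation:

    NOT() {
        # Run the wrapped command; an expected failure becomes a pass.
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Example: this passes because spdk_dd rejects the unknown --ii= option.
    NOT ./build/bin/spdk_dd --ii= --ob= && echo 'negative test passed'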
00:28:51.435 13:14:10 -- common/autotest_common.sh@643 -- # es=22 00:28:51.435 13:14:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:51.435 13:14:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:51.435 13:14:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:51.435 00:28:51.435 real 0m0.103s 00:28:51.435 user 0m0.055s 00:28:51.435 sys 0m0.046s 00:28:51.435 13:14:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.435 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:51.435 ************************************ 00:28:51.435 END TEST dd_double_input 00:28:51.435 ************************************ 00:28:51.435 13:14:10 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:28:51.435 13:14:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:51.435 13:14:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:51.435 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:51.435 ************************************ 00:28:51.435 START TEST dd_double_output 00:28:51.435 ************************************ 00:28:51.435 13:14:10 -- common/autotest_common.sh@1104 -- # double_output 00:28:51.435 13:14:10 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:28:51.435 13:14:10 -- common/autotest_common.sh@640 -- # local es=0 00:28:51.435 13:14:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:28:51.435 13:14:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.435 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.435 13:14:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.435 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.435 13:14:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.435 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.435 13:14:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.435 13:14:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:51.435 13:14:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:28:51.693 [2024-06-11 13:14:10.322137] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
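The two checks above (dd_double_input and dd_double_output) verify that spdk_dd accepts exactly one input, either --if or --ib, and exactly one output, either --of or --ob. For reference, invocations along these lines would be accepted; the file names match the dd.dump0/dd.dump1 test files created earlier, while dd_aio and bdev_conf.json are illustrative stand-ins for a bdev defined in a --json config:

    # file to file
    ./build/bin/spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=4096
    # file to bdev (the bdev must be defined in the supplied --json config)
    ./build/bin/spdk_dd --if=dd.dump0 --ob=dd_aio --bs=4096 --json bdev_conf.json
    # bdev to file
    ./build/bin/spdk_dd --ib=dd_aio --of=dd.dump1 --bs=4096 --json bdev_conf.json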
00:28:51.693 13:14:10 -- common/autotest_common.sh@643 -- # es=22 00:28:51.693 13:14:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:51.693 13:14:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:51.693 13:14:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:51.693 00:28:51.693 real 0m0.102s 00:28:51.693 user 0m0.042s 00:28:51.693 sys 0m0.059s 00:28:51.693 13:14:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.693 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:51.693 ************************************ 00:28:51.693 END TEST dd_double_output 00:28:51.693 ************************************ 00:28:51.693 13:14:10 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:28:51.693 13:14:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:51.693 13:14:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:51.693 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:51.693 ************************************ 00:28:51.693 START TEST dd_no_input 00:28:51.693 ************************************ 00:28:51.693 13:14:10 -- common/autotest_common.sh@1104 -- # no_input 00:28:51.693 13:14:10 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:28:51.693 13:14:10 -- common/autotest_common.sh@640 -- # local es=0 00:28:51.693 13:14:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:28:51.693 13:14:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.693 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.693 13:14:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.693 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.693 13:14:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.693 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.693 13:14:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.693 13:14:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:51.693 13:14:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:28:51.693 [2024-06-11 13:14:10.479528] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:28:51.693 13:14:10 -- common/autotest_common.sh@643 -- # es=22 00:28:51.693 13:14:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:51.693 13:14:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:51.693 13:14:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:51.693 00:28:51.693 real 0m0.118s 00:28:51.693 user 0m0.062s 00:28:51.693 sys 0m0.053s 00:28:51.693 13:14:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.693 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:51.693 ************************************ 00:28:51.693 END TEST dd_no_input 00:28:51.693 ************************************ 00:28:51.951 13:14:10 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:28:51.951 13:14:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:51.951 13:14:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:51.951 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:51.951 ************************************ 
00:28:51.951 START TEST dd_no_output 00:28:51.951 ************************************ 00:28:51.951 13:14:10 -- common/autotest_common.sh@1104 -- # no_output 00:28:51.951 13:14:10 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:51.951 13:14:10 -- common/autotest_common.sh@640 -- # local es=0 00:28:51.951 13:14:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:51.951 13:14:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.951 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.951 13:14:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.951 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.951 13:14:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.951 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.951 13:14:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.951 13:14:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:51.951 13:14:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:51.951 [2024-06-11 13:14:10.642586] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:28:51.951 13:14:10 -- common/autotest_common.sh@643 -- # es=22 00:28:51.951 13:14:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:51.951 13:14:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:51.951 13:14:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:51.951 00:28:51.951 real 0m0.117s 00:28:51.951 user 0m0.056s 00:28:51.951 sys 0m0.059s 00:28:51.951 13:14:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.951 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:51.951 ************************************ 00:28:51.951 END TEST dd_no_output 00:28:51.951 ************************************ 00:28:51.951 13:14:10 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:28:51.951 13:14:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:51.951 13:14:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:51.951 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:51.951 ************************************ 00:28:51.951 START TEST dd_wrong_blocksize 00:28:51.951 ************************************ 00:28:51.951 13:14:10 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:28:51.951 13:14:10 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:28:51.951 13:14:10 -- common/autotest_common.sh@640 -- # local es=0 00:28:51.951 13:14:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:28:51.952 13:14:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.952 13:14:10 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:28:51.952 13:14:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.952 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.952 13:14:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.952 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.952 13:14:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:51.952 13:14:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:51.952 13:14:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:28:52.209 [2024-06-11 13:14:10.803751] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:28:52.209 13:14:10 -- common/autotest_common.sh@643 -- # es=22 00:28:52.209 13:14:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:52.209 13:14:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:52.209 13:14:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:52.209 00:28:52.209 real 0m0.104s 00:28:52.209 user 0m0.058s 00:28:52.209 sys 0m0.043s 00:28:52.209 13:14:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:52.209 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:52.209 ************************************ 00:28:52.209 END TEST dd_wrong_blocksize 00:28:52.209 ************************************ 00:28:52.209 13:14:10 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:28:52.209 13:14:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:52.209 13:14:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:52.209 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:28:52.209 ************************************ 00:28:52.209 START TEST dd_smaller_blocksize 00:28:52.209 ************************************ 00:28:52.209 13:14:10 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:28:52.209 13:14:10 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:28:52.209 13:14:10 -- common/autotest_common.sh@640 -- # local es=0 00:28:52.209 13:14:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:28:52.209 13:14:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:52.209 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:52.209 13:14:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:52.209 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:52.209 13:14:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:52.209 13:14:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:52.209 13:14:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:52.209 13:14:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:28:52.209 13:14:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:28:52.209 [2024-06-11 13:14:10.957998] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:52.209 [2024-06-11 13:14:10.958318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140435 ] 00:28:52.467 [2024-06-11 13:14:11.128114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.726 [2024-06-11 13:14:11.353375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.292 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:28:53.292 [2024-06-11 13:14:11.939211] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:28:53.292 [2024-06-11 13:14:11.939584] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:53.859 [2024-06-11 13:14:12.532524] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:54.118 ************************************ 00:28:54.118 END TEST dd_smaller_blocksize 00:28:54.118 ************************************ 00:28:54.118 13:14:12 -- common/autotest_common.sh@643 -- # es=244 00:28:54.118 13:14:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:54.118 13:14:12 -- common/autotest_common.sh@652 -- # es=116 00:28:54.118 13:14:12 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:54.118 13:14:12 -- common/autotest_common.sh@660 -- # es=1 00:28:54.118 13:14:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:54.118 00:28:54.118 real 0m1.971s 00:28:54.118 user 0m1.402s 00:28:54.118 sys 0m0.466s 00:28:54.118 13:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:54.118 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:28:54.118 13:14:12 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:28:54.118 13:14:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:54.118 13:14:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:54.118 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:28:54.118 ************************************ 00:28:54.118 START TEST dd_invalid_count 00:28:54.118 ************************************ 00:28:54.118 13:14:12 -- common/autotest_common.sh@1104 -- # invalid_count 00:28:54.118 13:14:12 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:28:54.118 13:14:12 -- common/autotest_common.sh@640 -- # local es=0 00:28:54.118 13:14:12 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:28:54.118 13:14:12 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.118 13:14:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.118 13:14:12 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.118 13:14:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.118 13:14:12 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.118 13:14:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.118 13:14:12 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.118 13:14:12 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:54.118 13:14:12 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:28:54.377 [2024-06-11 13:14:12.984910] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:28:54.377 13:14:13 -- common/autotest_common.sh@643 -- # es=22 00:28:54.377 13:14:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:54.377 13:14:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:54.377 13:14:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:54.377 00:28:54.377 real 0m0.108s 00:28:54.377 user 0m0.064s 00:28:54.377 sys 0m0.042s 00:28:54.377 13:14:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:54.377 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:28:54.377 ************************************ 00:28:54.377 END TEST dd_invalid_count 00:28:54.377 ************************************ 00:28:54.377 13:14:13 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:28:54.377 13:14:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:54.377 13:14:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:54.377 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:28:54.377 ************************************ 00:28:54.377 START TEST dd_invalid_oflag 00:28:54.377 ************************************ 00:28:54.377 13:14:13 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:28:54.377 13:14:13 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:28:54.377 13:14:13 -- common/autotest_common.sh@640 -- # local es=0 00:28:54.377 13:14:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:28:54.377 13:14:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.377 13:14:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.377 13:14:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.377 13:14:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.377 13:14:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.377 13:14:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.377 13:14:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.377 13:14:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:54.377 13:14:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:28:54.377 [2024-06-11 13:14:13.134022] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:28:54.377 13:14:13 -- common/autotest_common.sh@643 -- # es=22 00:28:54.377 13:14:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:54.377 13:14:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:54.377 
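dd_invalid_oflag above exercises the rule that --oflag is only meaningful together with --of; the dd_invalid_iflag case that follows checks the mirror-image rule for --iflag and --if. The accepted flag values are the ones listed in the usage text earlier (append, direct, directory, dsync, noatime, noctty, nofollow, nonblock, sync). A hypothetical valid use of an output flag, not taken from this run:

    ./build/bin/spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=direct --bs=4096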
13:14:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:54.377 00:28:54.377 real 0m0.108s 00:28:54.377 user 0m0.069s 00:28:54.377 sys 0m0.038s 00:28:54.377 13:14:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:54.377 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:28:54.377 ************************************ 00:28:54.377 END TEST dd_invalid_oflag 00:28:54.377 ************************************ 00:28:54.637 13:14:13 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:28:54.637 13:14:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:54.637 13:14:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:54.637 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:28:54.637 ************************************ 00:28:54.637 START TEST dd_invalid_iflag 00:28:54.637 ************************************ 00:28:54.637 13:14:13 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:28:54.637 13:14:13 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:28:54.637 13:14:13 -- common/autotest_common.sh@640 -- # local es=0 00:28:54.637 13:14:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:28:54.637 13:14:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.637 13:14:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.637 13:14:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.637 13:14:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.637 13:14:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.637 13:14:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.637 13:14:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.637 13:14:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:54.637 13:14:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:28:54.637 [2024-06-11 13:14:13.278414] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:28:54.637 13:14:13 -- common/autotest_common.sh@643 -- # es=22 00:28:54.637 13:14:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:54.637 13:14:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:54.637 13:14:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:54.637 00:28:54.637 real 0m0.087s 00:28:54.637 user 0m0.037s 00:28:54.637 sys 0m0.048s 00:28:54.637 13:14:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:54.637 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:28:54.637 ************************************ 00:28:54.637 END TEST dd_invalid_iflag 00:28:54.637 ************************************ 00:28:54.637 13:14:13 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:28:54.637 13:14:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:54.637 13:14:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:54.637 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:28:54.637 ************************************ 00:28:54.637 START TEST dd_unknown_flag 00:28:54.637 ************************************ 00:28:54.637 13:14:13 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:28:54.637 13:14:13 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:28:54.637 13:14:13 -- common/autotest_common.sh@640 -- # local es=0 00:28:54.637 13:14:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:28:54.637 13:14:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.637 13:14:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.637 13:14:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.637 13:14:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.637 13:14:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.637 13:14:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:54.637 13:14:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:54.637 13:14:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:54.637 13:14:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:28:54.637 [2024-06-11 13:14:13.441298] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:54.637 [2024-06-11 13:14:13.441731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140558 ] 00:28:54.896 [2024-06-11 13:14:13.611661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.155 [2024-06-11 13:14:13.770509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.414 [2024-06-11 13:14:14.019698] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:28:55.414 [2024-06-11 13:14:14.020025] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:28:55.414 [2024-06-11 13:14:14.020157] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:28:55.414 [2024-06-11 13:14:14.020238] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:55.981 [2024-06-11 13:14:14.630360] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:56.241 ************************************ 00:28:56.241 END TEST dd_unknown_flag 00:28:56.241 ************************************ 00:28:56.241 13:14:14 -- common/autotest_common.sh@643 -- # es=234 00:28:56.241 13:14:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:56.241 13:14:14 -- common/autotest_common.sh@652 -- # es=106 00:28:56.241 13:14:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:56.241 13:14:14 -- common/autotest_common.sh@660 -- # es=1 00:28:56.241 13:14:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:56.241 00:28:56.241 real 0m1.603s 00:28:56.241 user 0m1.289s 00:28:56.241 sys 0m0.211s 00:28:56.241 13:14:14 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:28:56.241 13:14:14 -- common/autotest_common.sh@10 -- # set +x 00:28:56.241 13:14:15 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:28:56.241 13:14:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:56.241 13:14:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:56.241 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:28:56.241 ************************************ 00:28:56.241 START TEST dd_invalid_json 00:28:56.241 ************************************ 00:28:56.241 13:14:15 -- common/autotest_common.sh@1104 -- # invalid_json 00:28:56.241 13:14:15 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:28:56.241 13:14:15 -- common/autotest_common.sh@640 -- # local es=0 00:28:56.241 13:14:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:28:56.241 13:14:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:56.241 13:14:15 -- dd/negative_dd.sh@95 -- # : 00:28:56.241 13:14:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:56.241 13:14:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:56.241 13:14:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:56.241 13:14:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:56.241 13:14:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:56.241 13:14:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:56.241 13:14:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:56.241 13:14:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:28:56.241 [2024-06-11 13:14:15.082361] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
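The dd_invalid_json case above supplies the config over --json /dev/fd/62 instead of a file on disk, the same descriptor-passing trick used with gen_conf in the sparse tests at the top of this section; here the descriptor carries content that is not valid JSON, so the parse is expected to fail. A hedged sketch of the underlying technique using bash process substitution (the echoed payload is illustrative, not the exact bytes the test feeds in):

    # Hand spdk_dd an in-memory JSON config without writing a temporary file.
    ./build/bin/spdk_dd --if=dd.dump0 --of=dd.dump1 \
        --json <(echo '{ "subsystems": [] }')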
00:28:56.241 [2024-06-11 13:14:15.082751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140606 ] 00:28:56.500 [2024-06-11 13:14:15.235489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.759 [2024-06-11 13:14:15.415010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.759 [2024-06-11 13:14:15.415362] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:28:56.759 [2024-06-11 13:14:15.415509] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:56.759 [2024-06-11 13:14:15.415677] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:57.018 ************************************ 00:28:57.018 END TEST dd_invalid_json 00:28:57.018 ************************************ 00:28:57.018 13:14:15 -- common/autotest_common.sh@643 -- # es=234 00:28:57.018 13:14:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:57.018 13:14:15 -- common/autotest_common.sh@652 -- # es=106 00:28:57.018 13:14:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:57.018 13:14:15 -- common/autotest_common.sh@660 -- # es=1 00:28:57.018 13:14:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:57.018 00:28:57.018 real 0m0.718s 00:28:57.018 user 0m0.508s 00:28:57.018 sys 0m0.109s 00:28:57.018 13:14:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:57.018 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:28:57.018 00:28:57.018 real 0m5.924s 00:28:57.018 user 0m4.034s 00:28:57.018 sys 0m1.513s 00:28:57.018 ************************************ 00:28:57.018 END TEST spdk_dd_negative 00:28:57.018 ************************************ 00:28:57.018 13:14:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:57.018 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:28:57.018 ************************************ 00:28:57.018 END TEST spdk_dd 00:28:57.018 ************************************ 00:28:57.018 00:28:57.018 real 2m20.567s 00:28:57.018 user 1m50.486s 00:28:57.018 sys 0m20.095s 00:28:57.018 13:14:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:57.018 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:28:57.018 13:14:15 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:28:57.018 13:14:15 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:57.018 13:14:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:57.018 13:14:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:57.018 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:28:57.277 ************************************ 00:28:57.277 START TEST blockdev_nvme 00:28:57.277 ************************************ 00:28:57.277 13:14:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:57.277 * Looking for test storage... 
00:28:57.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:57.277 13:14:15 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:57.277 13:14:15 -- bdev/nbd_common.sh@6 -- # set -e 00:28:57.277 13:14:15 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:57.277 13:14:15 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:57.277 13:14:15 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:57.277 13:14:15 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:57.277 13:14:15 -- bdev/blockdev.sh@18 -- # : 00:28:57.277 13:14:15 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:28:57.277 13:14:15 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:28:57.277 13:14:15 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:28:57.277 13:14:15 -- bdev/blockdev.sh@672 -- # uname -s 00:28:57.277 13:14:15 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:28:57.277 13:14:15 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:28:57.277 13:14:15 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:28:57.277 13:14:15 -- bdev/blockdev.sh@681 -- # crypto_device= 00:28:57.277 13:14:15 -- bdev/blockdev.sh@682 -- # dek= 00:28:57.277 13:14:15 -- bdev/blockdev.sh@683 -- # env_ctx= 00:28:57.277 13:14:15 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:28:57.277 13:14:15 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:28:57.277 13:14:15 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:28:57.277 13:14:15 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:28:57.277 13:14:15 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:28:57.277 13:14:15 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=140700 00:28:57.277 13:14:15 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:57.277 13:14:15 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:57.277 13:14:15 -- bdev/blockdev.sh@47 -- # waitforlisten 140700 00:28:57.277 13:14:15 -- common/autotest_common.sh@819 -- # '[' -z 140700 ']' 00:28:57.277 13:14:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.277 13:14:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:57.277 13:14:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.277 13:14:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:57.277 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:28:57.277 [2024-06-11 13:14:16.030483] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:57.277 [2024-06-11 13:14:16.030879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140700 ] 00:28:57.535 [2024-06-11 13:14:16.194472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.535 [2024-06-11 13:14:16.369172] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:57.535 [2024-06-11 13:14:16.369624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.911 13:14:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:58.911 13:14:17 -- common/autotest_common.sh@852 -- # return 0 00:28:58.911 13:14:17 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:28:58.911 13:14:17 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:28:58.911 13:14:17 -- bdev/blockdev.sh@79 -- # local json 00:28:58.911 13:14:17 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:28:58.911 13:14:17 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:59.170 13:14:17 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:28:59.170 13:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.170 13:14:17 -- common/autotest_common.sh@10 -- # set +x 00:28:59.170 13:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:59.170 13:14:17 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:28:59.170 13:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.170 13:14:17 -- common/autotest_common.sh@10 -- # set +x 00:28:59.170 13:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:59.170 13:14:17 -- bdev/blockdev.sh@738 -- # cat 00:28:59.170 13:14:17 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:28:59.170 13:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.170 13:14:17 -- common/autotest_common.sh@10 -- # set +x 00:28:59.170 13:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:59.170 13:14:17 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:28:59.170 13:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.170 13:14:17 -- common/autotest_common.sh@10 -- # set +x 00:28:59.170 13:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:59.170 13:14:17 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:59.170 13:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.170 13:14:17 -- common/autotest_common.sh@10 -- # set +x 00:28:59.170 13:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:59.170 13:14:17 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:28:59.170 13:14:17 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:28:59.170 13:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.170 13:14:17 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:28:59.170 13:14:17 -- common/autotest_common.sh@10 -- # set +x 00:28:59.170 13:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:59.170 13:14:17 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:28:59.170 13:14:17 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c8bf96ba-4229-4390-a2f5-6d83aba6491b"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c8bf96ba-4229-4390-a2f5-6d83aba6491b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:28:59.170 13:14:17 -- bdev/blockdev.sh@747 -- # jq -r .name 00:28:59.429 13:14:18 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:28:59.429 13:14:18 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:28:59.429 13:14:18 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:28:59.429 13:14:18 -- bdev/blockdev.sh@752 -- # killprocess 140700 00:28:59.429 13:14:18 -- common/autotest_common.sh@926 -- # '[' -z 140700 ']' 00:28:59.429 13:14:18 -- common/autotest_common.sh@930 -- # kill -0 140700 00:28:59.429 13:14:18 -- common/autotest_common.sh@931 -- # uname 00:28:59.429 13:14:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:59.429 13:14:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140700 00:28:59.429 13:14:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:59.429 13:14:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:59.429 13:14:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140700' 00:28:59.429 killing process with pid 140700 00:28:59.429 13:14:18 -- common/autotest_common.sh@945 -- # kill 140700 00:28:59.429 13:14:18 -- common/autotest_common.sh@950 -- # wait 140700 00:29:01.370 13:14:19 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:01.370 13:14:19 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:01.370 13:14:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:01.370 13:14:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.370 13:14:19 -- common/autotest_common.sh@10 -- # set +x 00:29:01.370 ************************************ 00:29:01.370 START TEST bdev_hello_world 00:29:01.370 ************************************ 00:29:01.370 13:14:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:01.370 [2024-06-11 13:14:19.959639] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:01.370 [2024-06-11 13:14:19.960002] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140807 ] 00:29:01.370 [2024-06-11 13:14:20.128029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.629 [2024-06-11 13:14:20.288643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.888 [2024-06-11 13:14:20.678810] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:01.888 [2024-06-11 13:14:20.679010] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:01.888 [2024-06-11 13:14:20.679072] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:01.888 [2024-06-11 13:14:20.681638] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:01.888 [2024-06-11 13:14:20.682233] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:01.888 [2024-06-11 13:14:20.682432] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:01.888 [2024-06-11 13:14:20.682719] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:01.888 00:29:01.888 [2024-06-11 13:14:20.682873] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:02.824 ************************************ 00:29:02.824 END TEST bdev_hello_world 00:29:02.824 ************************************ 00:29:02.824 00:29:02.824 real 0m1.629s 00:29:02.824 user 0m1.329s 00:29:02.824 sys 0m0.200s 00:29:02.824 13:14:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:02.824 13:14:21 -- common/autotest_common.sh@10 -- # set +x 00:29:02.824 13:14:21 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:02.824 13:14:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:02.824 13:14:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:02.824 13:14:21 -- common/autotest_common.sh@10 -- # set +x 00:29:02.824 ************************************ 00:29:02.824 START TEST bdev_bounds 00:29:02.824 ************************************ 00:29:02.824 13:14:21 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:29:02.824 13:14:21 -- bdev/blockdev.sh@288 -- # bdevio_pid=140845 00:29:02.824 13:14:21 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:02.824 13:14:21 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:02.824 Process bdevio pid: 140845 00:29:02.824 13:14:21 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 140845' 00:29:02.824 13:14:21 -- bdev/blockdev.sh@291 -- # waitforlisten 140845 00:29:02.824 13:14:21 -- common/autotest_common.sh@819 -- # '[' -z 140845 ']' 00:29:02.824 13:14:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.824 13:14:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:02.824 13:14:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
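The 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' message above comes from waitforlisten, which blocks the test until the freshly spawned bdevio process answers on its RPC socket. A simplified sketch of that polling loop, assuming scripts/rpc.py with the rpc_get_methods RPC as the liveness probe (the real helper in autotest_common.sh applies its own probe and retry limit):

    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        while kill -0 "$pid" 2>/dev/null; do
            # Succeeds once the target is up and serving RPCs on the socket.
            ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # the process exited before its socket came up
    }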
00:29:02.824 13:14:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:02.824 13:14:21 -- common/autotest_common.sh@10 -- # set +x 00:29:02.824 [2024-06-11 13:14:21.639948] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:02.824 [2024-06-11 13:14:21.640365] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140845 ] 00:29:03.083 [2024-06-11 13:14:21.807991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:03.342 [2024-06-11 13:14:21.988973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.342 [2024-06-11 13:14:21.989067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.342 [2024-06-11 13:14:21.989066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.909 13:14:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:03.909 13:14:22 -- common/autotest_common.sh@852 -- # return 0 00:29:03.909 13:14:22 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:03.909 I/O targets: 00:29:03.909 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:03.909 00:29:03.909 00:29:03.909 CUnit - A unit testing framework for C - Version 2.1-3 00:29:03.909 http://cunit.sourceforge.net/ 00:29:03.909 00:29:03.909 00:29:03.909 Suite: bdevio tests on: Nvme0n1 00:29:03.909 Test: blockdev write read block ...passed 00:29:03.910 Test: blockdev write zeroes read block ...passed 00:29:03.910 Test: blockdev write zeroes read no split ...passed 00:29:03.910 Test: blockdev write zeroes read split ...passed 00:29:03.910 Test: blockdev write zeroes read split partial ...passed 00:29:03.910 Test: blockdev reset ...[2024-06-11 13:14:22.696004] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:03.910 [2024-06-11 13:14:22.699333] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:03.910 passed 00:29:03.910 Test: blockdev write read 8 blocks ...passed 00:29:03.910 Test: blockdev write read size > 128k ...passed 00:29:03.910 Test: blockdev write read invalid size ...passed 00:29:03.910 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:03.910 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:03.910 Test: blockdev write read max offset ...passed 00:29:03.910 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:03.910 Test: blockdev writev readv 8 blocks ...passed 00:29:03.910 Test: blockdev writev readv 30 x 1block ...passed 00:29:03.910 Test: blockdev writev readv block ...passed 00:29:03.910 Test: blockdev writev readv size > 128k ...passed 00:29:03.910 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:03.910 Test: blockdev comparev and writev ...[2024-06-11 13:14:22.708966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0xb7c0d000 len:0x1000 00:29:03.910 [2024-06-11 13:14:22.709202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:03.910 passed 00:29:03.910 Test: blockdev nvme passthru rw ...passed 00:29:03.910 Test: blockdev nvme passthru vendor specific ...[2024-06-11 13:14:22.710609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:03.910 [2024-06-11 13:14:22.710816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:03.910 passed 00:29:03.910 Test: blockdev nvme admin passthru ...passed 00:29:03.910 Test: blockdev copy ...passed 00:29:03.910 00:29:03.910 Run Summary: Type Total Ran Passed Failed Inactive 00:29:03.910 suites 1 1 n/a 0 0 00:29:03.910 tests 23 23 23 0 0 00:29:03.910 asserts 152 152 152 0 n/a 00:29:03.910 00:29:03.910 Elapsed time = 0.186 seconds 00:29:03.910 0 00:29:03.910 13:14:22 -- bdev/blockdev.sh@293 -- # killprocess 140845 00:29:03.910 13:14:22 -- common/autotest_common.sh@926 -- # '[' -z 140845 ']' 00:29:03.910 13:14:22 -- common/autotest_common.sh@930 -- # kill -0 140845 00:29:03.910 13:14:22 -- common/autotest_common.sh@931 -- # uname 00:29:03.910 13:14:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:03.910 13:14:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140845 00:29:03.910 13:14:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:03.910 killing process with pid 140845 00:29:03.910 13:14:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:03.910 13:14:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140845' 00:29:03.910 13:14:22 -- common/autotest_common.sh@945 -- # kill 140845 00:29:03.910 13:14:22 -- common/autotest_common.sh@950 -- # wait 140845 00:29:05.287 ************************************ 00:29:05.287 END TEST bdev_bounds 00:29:05.287 ************************************ 00:29:05.287 13:14:23 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:05.287 00:29:05.287 real 0m2.139s 00:29:05.287 user 0m5.047s 00:29:05.287 sys 0m0.312s 00:29:05.287 13:14:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.287 13:14:23 -- common/autotest_common.sh@10 -- # set +x 00:29:05.287 13:14:23 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
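The nbd_function_test invoked on the last line exports the Nvme0n1 bdev through the kernel NBD driver and round-trips data through the resulting block node. A condensed sketch of that flow, assembled from the rpc.py calls and dd/cmp commands visible in the trace below (the scratch-file path is shortened here; error handling and the JSON bookkeeping via nbd_get_disks are omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  # map the bdev onto /dev/nbd0 and wait until the kernel exposes it in /proc/partitions
  $rpc -s $sock nbd_start_disk Nvme0n1 /dev/nbd0
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
  # push 1 MiB of random data through the NBD node, then read it back and compare
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
  # tear the mapping down again
  $rpc -s $sock nbd_stop_disk /dev/nbd0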
00:29:05.287 13:14:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:29:05.287 13:14:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.287 13:14:23 -- common/autotest_common.sh@10 -- # set +x 00:29:05.287 ************************************ 00:29:05.287 START TEST bdev_nbd 00:29:05.287 ************************************ 00:29:05.287 13:14:23 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:29:05.287 13:14:23 -- bdev/blockdev.sh@298 -- # uname -s 00:29:05.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:05.287 13:14:23 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:05.287 13:14:23 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:05.287 13:14:23 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:05.287 13:14:23 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:29:05.287 13:14:23 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:05.287 13:14:23 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:05.287 13:14:23 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:05.287 13:14:23 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:29:05.287 13:14:23 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:05.287 13:14:23 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:05.287 13:14:23 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:29:05.287 13:14:23 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:05.287 13:14:23 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:29:05.287 13:14:23 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:05.287 13:14:23 -- bdev/blockdev.sh@316 -- # nbd_pid=140914 00:29:05.287 13:14:23 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:05.287 13:14:23 -- bdev/blockdev.sh@318 -- # waitforlisten 140914 /var/tmp/spdk-nbd.sock 00:29:05.287 13:14:23 -- common/autotest_common.sh@819 -- # '[' -z 140914 ']' 00:29:05.287 13:14:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:05.287 13:14:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:05.287 13:14:23 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:05.287 13:14:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:05.287 13:14:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:05.287 13:14:23 -- common/autotest_common.sh@10 -- # set +x 00:29:05.287 [2024-06-11 13:14:23.830465] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:05.287 [2024-06-11 13:14:23.830885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.287 [2024-06-11 13:14:23.988642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.546 [2024-06-11 13:14:24.234983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.113 13:14:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:06.113 13:14:24 -- common/autotest_common.sh@852 -- # return 0 00:29:06.113 13:14:24 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@24 -- # local i 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:06.113 13:14:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:06.371 13:14:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:06.371 13:14:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:06.371 13:14:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:06.371 13:14:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:06.371 13:14:25 -- common/autotest_common.sh@857 -- # local i 00:29:06.371 13:14:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:06.371 13:14:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:06.371 13:14:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:06.371 13:14:25 -- common/autotest_common.sh@861 -- # break 00:29:06.371 13:14:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:06.371 13:14:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:06.371 13:14:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:06.371 1+0 records in 00:29:06.371 1+0 records out 00:29:06.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770113 s, 5.3 MB/s 00:29:06.371 13:14:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:06.371 13:14:25 -- common/autotest_common.sh@874 -- # size=4096 00:29:06.371 13:14:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:06.371 13:14:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:06.371 13:14:25 -- common/autotest_common.sh@877 -- # return 0 00:29:06.371 13:14:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:06.371 13:14:25 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:06.371 13:14:25 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:06.629 13:14:25 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:06.629 { 00:29:06.629 "nbd_device": "/dev/nbd0", 00:29:06.629 "bdev_name": "Nvme0n1" 00:29:06.629 } 00:29:06.629 ]' 00:29:06.629 13:14:25 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:06.629 13:14:25 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:06.629 { 00:29:06.629 "nbd_device": "/dev/nbd0", 00:29:06.629 "bdev_name": "Nvme0n1" 00:29:06.629 } 00:29:06.629 ]' 00:29:06.629 13:14:25 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:06.629 13:14:25 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:06.629 13:14:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:06.629 13:14:25 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:06.629 13:14:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:06.629 13:14:25 -- bdev/nbd_common.sh@51 -- # local i 00:29:06.629 13:14:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:06.629 13:14:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:06.888 13:14:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:06.888 13:14:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:06.888 13:14:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:06.888 13:14:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:06.888 13:14:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:06.888 13:14:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:06.888 13:14:25 -- bdev/nbd_common.sh@41 -- # break 00:29:06.888 13:14:25 -- bdev/nbd_common.sh@45 -- # return 0 00:29:06.888 13:14:25 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:06.888 13:14:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:06.888 13:14:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@65 -- # true 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@65 -- # count=0 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@122 -- # count=0 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@127 -- # return 0 00:29:07.146 13:14:25 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 
00:29:07.146 13:14:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:07.146 13:14:25 -- bdev/nbd_common.sh@12 -- # local i 00:29:07.147 13:14:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:07.147 13:14:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:07.147 13:14:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:07.406 /dev/nbd0 00:29:07.406 13:14:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:07.406 13:14:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:07.406 13:14:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:07.406 13:14:26 -- common/autotest_common.sh@857 -- # local i 00:29:07.406 13:14:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:07.406 13:14:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:07.406 13:14:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:07.406 13:14:26 -- common/autotest_common.sh@861 -- # break 00:29:07.406 13:14:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:07.406 13:14:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:07.406 13:14:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:07.406 1+0 records in 00:29:07.406 1+0 records out 00:29:07.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654032 s, 6.3 MB/s 00:29:07.406 13:14:26 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:07.406 13:14:26 -- common/autotest_common.sh@874 -- # size=4096 00:29:07.406 13:14:26 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:07.406 13:14:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:07.406 13:14:26 -- common/autotest_common.sh@877 -- # return 0 00:29:07.406 13:14:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:07.406 13:14:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:07.406 13:14:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:07.406 13:14:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:07.406 13:14:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:07.665 { 00:29:07.665 "nbd_device": "/dev/nbd0", 00:29:07.665 "bdev_name": "Nvme0n1" 00:29:07.665 } 00:29:07.665 ]' 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:07.665 { 00:29:07.665 "nbd_device": "/dev/nbd0", 00:29:07.665 "bdev_name": "Nvme0n1" 00:29:07.665 } 00:29:07.665 ]' 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@65 -- # count=1 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@95 -- # count=1 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 
00:29:07.665 13:14:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:07.665 256+0 records in 00:29:07.665 256+0 records out 00:29:07.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00922327 s, 114 MB/s 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:07.665 13:14:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:07.924 256+0 records in 00:29:07.924 256+0 records out 00:29:07.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0643821 s, 16.3 MB/s 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@51 -- # local i 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:07.924 13:14:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@41 -- # break 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@45 -- # return 0 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:08.182 13:14:26 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@65 -- # true 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@65 -- # count=0 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@104 -- # count=0 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@109 -- # return 0 00:29:08.440 13:14:27 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:08.440 13:14:27 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:08.699 malloc_lvol_verify 00:29:08.699 13:14:27 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:08.957 32512f8d-4744-4351-bd04-d2cea12c4cc8 00:29:08.957 13:14:27 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:09.216 59c32027-b098-4dff-a51b-6389a1b57a9a 00:29:09.216 13:14:27 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:09.216 /dev/nbd0 00:29:09.216 13:14:28 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:09.216 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:09.216 00:29:09.216 Allocating group tables: 0/1 done 00:29:09.216 Writing inode tables: 0/1 done 00:29:09.216 Writing superblocks and filesystem accounting information: 0/1 done 00:29:09.216 00:29:09.216 mke2fs 1.45.5 (07-Jan-2020) 00:29:09.216 00:29:09.216 Filesystem too small for a journal 00:29:09.216 13:14:28 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:09.216 13:14:28 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:09.216 13:14:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:09.216 13:14:28 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:09.216 13:14:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:09.216 13:14:28 -- bdev/nbd_common.sh@51 -- # local i 00:29:09.216 13:14:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:09.216 13:14:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:09.475 13:14:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:09.475 13:14:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:09.475 13:14:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:09.475 13:14:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:09.475 13:14:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:09.475 13:14:28 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:09.475 13:14:28 -- bdev/nbd_common.sh@41 -- # break 00:29:09.475 13:14:28 -- bdev/nbd_common.sh@45 -- # return 0 00:29:09.475 13:14:28 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:09.475 13:14:28 -- bdev/nbd_common.sh@147 -- # return 0 00:29:09.475 13:14:28 -- bdev/blockdev.sh@324 -- # killprocess 140914 00:29:09.475 13:14:28 -- common/autotest_common.sh@926 -- # '[' -z 140914 ']' 00:29:09.475 13:14:28 -- common/autotest_common.sh@930 -- # kill -0 140914 00:29:09.475 13:14:28 -- common/autotest_common.sh@931 -- # uname 00:29:09.733 13:14:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:09.733 13:14:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140914 00:29:09.733 13:14:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:09.733 13:14:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:09.733 13:14:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140914' 00:29:09.733 killing process with pid 140914 00:29:09.733 13:14:28 -- common/autotest_common.sh@945 -- # kill 140914 00:29:09.733 13:14:28 -- common/autotest_common.sh@950 -- # wait 140914 00:29:10.668 ************************************ 00:29:10.668 END TEST bdev_nbd 00:29:10.668 ************************************ 00:29:10.668 13:14:29 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:10.668 00:29:10.668 real 0m5.468s 00:29:10.668 user 0m7.885s 00:29:10.668 sys 0m1.183s 00:29:10.668 13:14:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:10.668 13:14:29 -- common/autotest_common.sh@10 -- # set +x 00:29:10.668 skipping fio tests on NVMe due to multi-ns failures. 00:29:10.668 13:14:29 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:10.668 13:14:29 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:29:10.668 13:14:29 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:10.668 13:14:29 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:10.668 13:14:29 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:10.668 13:14:29 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:29:10.668 13:14:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:10.668 13:14:29 -- common/autotest_common.sh@10 -- # set +x 00:29:10.668 ************************************ 00:29:10.668 START TEST bdev_verify 00:29:10.668 ************************************ 00:29:10.668 13:14:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:10.668 [2024-06-11 13:14:29.346397] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:10.668 [2024-06-11 13:14:29.346561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141120 ] 00:29:10.668 [2024-06-11 13:14:29.499146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:10.926 [2024-06-11 13:14:29.676046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.926 [2024-06-11 13:14:29.676078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.507 Running I/O for 5 seconds... 00:29:16.794 00:29:16.794 Latency(us) 00:29:16.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.794 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:16.794 Verification LBA range: start 0x0 length 0xa0000 00:29:16.794 Nvme0n1 : 5.01 14642.72 57.20 0.00 0.00 8705.17 513.86 12690.15 00:29:16.794 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:16.794 Verification LBA range: start 0xa0000 length 0xa0000 00:29:16.794 Nvme0n1 : 5.01 14753.87 57.63 0.00 0.00 8641.74 476.63 16562.73 00:29:16.794 =================================================================================================================== 00:29:16.794 Total : 29396.59 114.83 0.00 0.00 8673.33 476.63 16562.73 00:29:24.904 00:29:24.904 real 0m13.436s 00:29:24.904 user 0m25.668s 00:29:24.904 sys 0m0.359s 00:29:24.904 13:14:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.904 ************************************ 00:29:24.904 END TEST bdev_verify 00:29:24.904 ************************************ 00:29:24.904 13:14:42 -- common/autotest_common.sh@10 -- # set +x 00:29:24.904 13:14:42 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:24.904 13:14:42 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:29:24.904 13:14:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.904 13:14:42 -- common/autotest_common.sh@10 -- # set +x 00:29:24.904 ************************************ 00:29:24.904 START TEST bdev_verify_big_io 00:29:24.904 ************************************ 00:29:24.904 13:14:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:24.904 [2024-06-11 13:14:42.846422] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:24.904 [2024-06-11 13:14:42.846663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141294 ] 00:29:24.904 [2024-06-11 13:14:43.018813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:24.904 [2024-06-11 13:14:43.211363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.904 [2024-06-11 13:14:43.211376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.904 Running I/O for 5 seconds... 
00:29:30.216 00:29:30.216 Latency(us) 00:29:30.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.216 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:30.216 Verification LBA range: start 0x0 length 0xa000 00:29:30.216 Nvme0n1 : 5.03 1988.35 124.27 0.00 0.00 63492.90 703.77 108670.60 00:29:30.216 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:30.216 Verification LBA range: start 0xa000 length 0xa000 00:29:30.216 Nvme0n1 : 5.03 2628.84 164.30 0.00 0.00 48112.02 636.74 64344.44 00:29:30.216 =================================================================================================================== 00:29:30.216 Total : 4617.19 288.57 0.00 0.00 54734.27 636.74 108670.60 00:29:31.153 00:29:31.153 real 0m7.187s 00:29:31.153 user 0m13.244s 00:29:31.153 sys 0m0.277s 00:29:31.153 13:14:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:31.153 ************************************ 00:29:31.153 END TEST bdev_verify_big_io 00:29:31.153 ************************************ 00:29:31.153 13:14:49 -- common/autotest_common.sh@10 -- # set +x 00:29:31.411 13:14:50 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:31.411 13:14:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:31.411 13:14:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:31.411 13:14:50 -- common/autotest_common.sh@10 -- # set +x 00:29:31.411 ************************************ 00:29:31.411 START TEST bdev_write_zeroes 00:29:31.411 ************************************ 00:29:31.411 13:14:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:31.411 [2024-06-11 13:14:50.084406] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:31.411 [2024-06-11 13:14:50.084600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141407 ] 00:29:31.411 [2024-06-11 13:14:50.249292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.670 [2024-06-11 13:14:50.425958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.237 Running I/O for 1 seconds... 
00:29:33.170 00:29:33.170 Latency(us) 00:29:33.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.170 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:33.170 Nvme0n1 : 1.00 65435.03 255.61 0.00 0.00 1950.94 584.61 10783.65 00:29:33.170 =================================================================================================================== 00:29:33.170 Total : 65435.03 255.61 0.00 0.00 1950.94 584.61 10783.65 00:29:34.106 00:29:34.106 real 0m2.726s 00:29:34.106 user 0m2.391s 00:29:34.106 sys 0m0.236s 00:29:34.106 ************************************ 00:29:34.106 END TEST bdev_write_zeroes 00:29:34.106 ************************************ 00:29:34.106 13:14:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:34.106 13:14:52 -- common/autotest_common.sh@10 -- # set +x 00:29:34.106 13:14:52 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:34.106 13:14:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:34.106 13:14:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:34.106 13:14:52 -- common/autotest_common.sh@10 -- # set +x 00:29:34.106 ************************************ 00:29:34.106 START TEST bdev_json_nonenclosed 00:29:34.106 ************************************ 00:29:34.106 13:14:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:34.106 [2024-06-11 13:14:52.852616] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:34.106 [2024-06-11 13:14:52.852774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141475 ] 00:29:34.365 [2024-06-11 13:14:53.004299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.365 [2024-06-11 13:14:53.165352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.365 [2024-06-11 13:14:53.165550] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:29:34.365 [2024-06-11 13:14:53.165589] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:34.932 00:29:34.932 real 0m0.678s 00:29:34.932 user 0m0.450s 00:29:34.932 sys 0m0.128s 00:29:34.932 13:14:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:34.932 ************************************ 00:29:34.932 END TEST bdev_json_nonenclosed 00:29:34.932 ************************************ 00:29:34.932 13:14:53 -- common/autotest_common.sh@10 -- # set +x 00:29:34.932 13:14:53 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:34.932 13:14:53 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:34.932 13:14:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:34.932 13:14:53 -- common/autotest_common.sh@10 -- # set +x 00:29:34.932 ************************************ 00:29:34.932 START TEST bdev_json_nonarray 00:29:34.932 ************************************ 00:29:34.932 13:14:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:34.932 [2024-06-11 13:14:53.581635] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:34.932 [2024-06-11 13:14:53.581834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141504 ] 00:29:34.932 [2024-06-11 13:14:53.737155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.190 [2024-06-11 13:14:53.896708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.190 [2024-06-11 13:14:53.897161] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:29:35.190 [2024-06-11 13:14:53.897318] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:35.449 ************************************ 00:29:35.449 END TEST bdev_json_nonarray 00:29:35.449 ************************************ 00:29:35.449 00:29:35.449 real 0m0.691s 00:29:35.449 user 0m0.470s 00:29:35.449 sys 0m0.121s 00:29:35.449 13:14:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:35.449 13:14:54 -- common/autotest_common.sh@10 -- # set +x 00:29:35.449 13:14:54 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:29:35.449 13:14:54 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:29:35.449 13:14:54 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:29:35.449 13:14:54 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:29:35.449 13:14:54 -- bdev/blockdev.sh@809 -- # cleanup 00:29:35.449 13:14:54 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:35.449 13:14:54 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:35.449 13:14:54 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:29:35.449 13:14:54 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:29:35.449 13:14:54 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:29:35.449 13:14:54 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:29:35.449 ************************************ 00:29:35.449 END TEST blockdev_nvme 00:29:35.449 ************************************ 00:29:35.449 00:29:35.449 real 0m38.393s 00:29:35.449 user 1m1.054s 00:29:35.449 sys 0m3.515s 00:29:35.449 13:14:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:35.449 13:14:54 -- common/autotest_common.sh@10 -- # set +x 00:29:35.707 13:14:54 -- spdk/autotest.sh@219 -- # uname -s 00:29:35.707 13:14:54 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:29:35.707 13:14:54 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:35.707 13:14:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:35.707 13:14:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:35.707 13:14:54 -- common/autotest_common.sh@10 -- # set +x 00:29:35.707 ************************************ 00:29:35.707 START TEST blockdev_nvme_gpt 00:29:35.707 ************************************ 00:29:35.707 13:14:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:35.707 * Looking for test storage... 
00:29:35.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:35.707 13:14:54 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:35.707 13:14:54 -- bdev/nbd_common.sh@6 -- # set -e 00:29:35.707 13:14:54 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:35.707 13:14:54 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:35.707 13:14:54 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:35.707 13:14:54 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:35.707 13:14:54 -- bdev/blockdev.sh@18 -- # : 00:29:35.707 13:14:54 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:35.707 13:14:54 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:35.707 13:14:54 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:35.707 13:14:54 -- bdev/blockdev.sh@672 -- # uname -s 00:29:35.707 13:14:54 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:35.707 13:14:54 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:35.707 13:14:54 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:29:35.707 13:14:54 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:35.707 13:14:54 -- bdev/blockdev.sh@682 -- # dek= 00:29:35.707 13:14:54 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:35.707 13:14:54 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:35.707 13:14:54 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:35.707 13:14:54 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:29:35.708 13:14:54 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:29:35.708 13:14:54 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:35.708 13:14:54 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=141589 00:29:35.708 13:14:54 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:35.708 13:14:54 -- bdev/blockdev.sh@47 -- # waitforlisten 141589 00:29:35.708 13:14:54 -- common/autotest_common.sh@819 -- # '[' -z 141589 ']' 00:29:35.708 13:14:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.708 13:14:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:35.708 13:14:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.708 13:14:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:35.708 13:14:54 -- common/autotest_common.sh@10 -- # set +x 00:29:35.708 13:14:54 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:35.708 [2024-06-11 13:14:54.456388] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:35.708 [2024-06-11 13:14:54.456865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141589 ] 00:29:35.966 [2024-06-11 13:14:54.606237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.966 [2024-06-11 13:14:54.769110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:35.966 [2024-06-11 13:14:54.769586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.341 13:14:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:37.341 13:14:56 -- common/autotest_common.sh@852 -- # return 0 00:29:37.341 13:14:56 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:37.341 13:14:56 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:29:37.341 13:14:56 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:37.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:37.599 Waiting for block devices as requested 00:29:37.599 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:37.857 13:14:56 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:29:37.857 13:14:56 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:29:37.857 13:14:56 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:29:37.857 13:14:56 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:29:37.857 13:14:56 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:29:37.857 13:14:56 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:29:37.857 13:14:56 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:29:37.857 13:14:56 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:37.857 13:14:56 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:29:37.857 13:14:56 -- bdev/blockdev.sh@105 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:29:37.857 13:14:56 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:29:37.857 13:14:56 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:29:37.857 13:14:56 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:29:37.857 13:14:56 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:29:37.857 13:14:56 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:29:37.857 13:14:56 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:29:37.857 13:14:56 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:29:37.857 BYT; 00:29:37.857 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:29:37.857 13:14:56 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:29:37.857 BYT; 00:29:37.857 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:29:37.857 13:14:56 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:29:37.857 13:14:56 -- bdev/blockdev.sh@114 -- # break 00:29:37.857 13:14:56 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:29:37.857 13:14:56 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:29:37.857 13:14:56 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:29:37.857 13:14:56 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% 
mkpart SPDK_TEST_second 50% 100% 00:29:38.792 13:14:57 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:29:38.792 13:14:57 -- scripts/common.sh@410 -- # local spdk_guid 00:29:38.792 13:14:57 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:38.792 13:14:57 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:38.792 13:14:57 -- scripts/common.sh@415 -- # IFS='()' 00:29:38.792 13:14:57 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:29:38.792 13:14:57 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:38.792 13:14:57 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:29:38.792 13:14:57 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:38.792 13:14:57 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:38.792 13:14:57 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:38.792 13:14:57 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:29:38.792 13:14:57 -- scripts/common.sh@422 -- # local spdk_guid 00:29:38.792 13:14:57 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:38.792 13:14:57 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:38.792 13:14:57 -- scripts/common.sh@427 -- # IFS='()' 00:29:38.792 13:14:57 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:29:38.792 13:14:57 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:38.792 13:14:57 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:29:38.792 13:14:57 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:38.792 13:14:57 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:38.792 13:14:57 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:38.792 13:14:57 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:29:39.726 The operation has completed successfully. 00:29:39.726 13:14:58 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:29:40.670 The operation has completed successfully. 
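The two "operation has completed successfully" messages above come from sgdisk retyping the partitions that parted created a moment earlier. In outline, the GPT setup performed on the scratch namespace is (commands and GUIDs copied from the trace):

  disk=/dev/nvme0n1
  # lay down a fresh GPT label and split the namespace into two halves
  parted -s $disk mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
  # stamp SPDK's partition-type GUIDs (plus fixed unique GUIDs) so that the gpt
  # virtual-bdev module later exposes the halves as Nvme0n1p1 and Nvme0n1p2
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 $disk
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df $disk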
00:29:40.670 13:14:59 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:40.944 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:41.202 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:29:42.138 13:15:00 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:29:42.138 13:15:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.138 13:15:00 -- common/autotest_common.sh@10 -- # set +x 00:29:42.138 [] 00:29:42.138 13:15:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.138 13:15:00 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:29:42.138 13:15:00 -- bdev/blockdev.sh@79 -- # local json 00:29:42.138 13:15:00 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:29:42.138 13:15:00 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:42.138 13:15:00 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:29:42.138 13:15:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.138 13:15:00 -- common/autotest_common.sh@10 -- # set +x 00:29:42.138 13:15:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.138 13:15:00 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:42.138 13:15:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.138 13:15:00 -- common/autotest_common.sh@10 -- # set +x 00:29:42.138 13:15:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.138 13:15:00 -- bdev/blockdev.sh@738 -- # cat 00:29:42.138 13:15:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:42.138 13:15:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.138 13:15:00 -- common/autotest_common.sh@10 -- # set +x 00:29:42.397 13:15:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.397 13:15:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:42.397 13:15:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.397 13:15:00 -- common/autotest_common.sh@10 -- # set +x 00:29:42.397 13:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.397 13:15:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:42.397 13:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.397 13:15:01 -- common/autotest_common.sh@10 -- # set +x 00:29:42.397 13:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.397 13:15:01 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:42.397 13:15:01 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:42.397 13:15:01 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:42.397 13:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.397 13:15:01 -- common/autotest_common.sh@10 -- # set +x 00:29:42.397 13:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:42.397 13:15:01 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:42.397 13:15:01 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:42.397 13:15:01 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:29:42.397 13:15:01 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:42.397 13:15:01 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:29:42.397 13:15:01 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:42.397 13:15:01 -- bdev/blockdev.sh@752 -- # killprocess 141589 00:29:42.397 13:15:01 -- common/autotest_common.sh@926 -- # '[' -z 141589 ']' 00:29:42.397 13:15:01 -- common/autotest_common.sh@930 -- # kill -0 141589 00:29:42.397 13:15:01 -- common/autotest_common.sh@931 -- # uname 00:29:42.397 13:15:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:42.397 13:15:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141589 00:29:42.397 13:15:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:42.397 killing process with pid 141589 00:29:42.397 13:15:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:42.397 13:15:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141589' 00:29:42.397 13:15:01 -- common/autotest_common.sh@945 -- # kill 141589 00:29:42.397 13:15:01 -- common/autotest_common.sh@950 -- # wait 141589 00:29:44.298 13:15:02 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:44.298 13:15:02 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:29:44.298 13:15:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:44.298 13:15:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:44.298 13:15:02 -- common/autotest_common.sh@10 -- # set +x 00:29:44.298 ************************************ 00:29:44.298 START TEST bdev_hello_world 00:29:44.298 ************************************ 00:29:44.298 13:15:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:29:44.298 [2024-06-11 13:15:03.016995] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:44.298 [2024-06-11 13:15:03.017149] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142151 ] 00:29:44.556 [2024-06-11 13:15:03.178481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.556 [2024-06-11 13:15:03.396678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.122 [2024-06-11 13:15:03.840595] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:45.122 [2024-06-11 13:15:03.840689] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:29:45.122 [2024-06-11 13:15:03.840722] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:45.122 [2024-06-11 13:15:03.843782] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:45.122 [2024-06-11 13:15:03.844579] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:45.122 [2024-06-11 13:15:03.844670] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:45.122 [2024-06-11 13:15:03.845107] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:45.122 00:29:45.122 [2024-06-11 13:15:03.845183] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:46.498 00:29:46.498 real 0m2.037s 00:29:46.498 user 0m1.685s 00:29:46.498 sys 0m0.252s 00:29:46.498 13:15:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.498 13:15:04 -- common/autotest_common.sh@10 -- # set +x 00:29:46.498 ************************************ 00:29:46.498 END TEST bdev_hello_world 00:29:46.498 ************************************ 00:29:46.498 13:15:05 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:46.498 13:15:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:46.498 13:15:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:46.498 13:15:05 -- common/autotest_common.sh@10 -- # set +x 00:29:46.498 ************************************ 00:29:46.498 START TEST bdev_bounds 00:29:46.498 ************************************ 00:29:46.498 13:15:05 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:29:46.498 13:15:05 -- bdev/blockdev.sh@288 -- # bdevio_pid=142201 00:29:46.498 13:15:05 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:46.498 13:15:05 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:46.498 Process bdevio pid: 142201 00:29:46.498 13:15:05 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 142201' 00:29:46.498 13:15:05 -- bdev/blockdev.sh@291 -- # waitforlisten 142201 00:29:46.498 13:15:05 -- common/autotest_common.sh@819 -- # '[' -z 142201 ']' 00:29:46.498 13:15:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.498 13:15:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:46.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.498 13:15:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
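For readers following the trace, the bounds test being set up above reduces to the short sequence below. The bdevio path, flags and config file are the ones recorded in this log; the backgrounding, the `bdevio_pid` variable and the final `kill` are illustrative stand-ins for the harness's waitforlisten/killprocess helpers, so treat this as a sketch rather than part of the recorded run.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk                      # repo layout as used throughout this log
    # -w makes bdevio wait until the tests are kicked off over RPC; -s 0 is passed through as recorded
    $SPDK_DIR/test/bdev/bdevio/bdevio -w -s 0 --json $SPDK_DIR/test/bdev/bdev.json &
    bdevio_pid=$!
    # once /var/tmp/spdk.sock is listening, drive the CUnit suites over RPC, then tear the app down
    $SPDK_DIR/test/bdev/bdevio/tests.py perform_tests
    kill $bdevio_pid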
00:29:46.498 13:15:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:46.498 13:15:05 -- common/autotest_common.sh@10 -- # set +x 00:29:46.498 [2024-06-11 13:15:05.131821] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:46.498 [2024-06-11 13:15:05.132115] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142201 ] 00:29:46.756 [2024-06-11 13:15:05.345040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:46.756 [2024-06-11 13:15:05.585063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.756 [2024-06-11 13:15:05.585224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.756 [2024-06-11 13:15:05.585221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.323 13:15:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:47.323 13:15:06 -- common/autotest_common.sh@852 -- # return 0 00:29:47.323 13:15:06 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:47.581 I/O targets: 00:29:47.581 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:29:47.581 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:29:47.581 00:29:47.581 00:29:47.581 CUnit - A unit testing framework for C - Version 2.1-3 00:29:47.581 http://cunit.sourceforge.net/ 00:29:47.581 00:29:47.581 00:29:47.581 Suite: bdevio tests on: Nvme0n1p2 00:29:47.581 Test: blockdev write read block ...passed 00:29:47.581 Test: blockdev write zeroes read block ...passed 00:29:47.581 Test: blockdev write zeroes read no split ...passed 00:29:47.581 Test: blockdev write zeroes read split ...passed 00:29:47.581 Test: blockdev write zeroes read split partial ...passed 00:29:47.581 Test: blockdev reset ...[2024-06-11 13:15:06.306539] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:47.581 [2024-06-11 13:15:06.309942] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:47.581 passed 00:29:47.581 Test: blockdev write read 8 blocks ...passed 00:29:47.581 Test: blockdev write read size > 128k ...passed 00:29:47.581 Test: blockdev write read invalid size ...passed 00:29:47.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:47.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:47.581 Test: blockdev write read max offset ...passed 00:29:47.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:47.581 Test: blockdev writev readv 8 blocks ...passed 00:29:47.581 Test: blockdev writev readv 30 x 1block ...passed 00:29:47.581 Test: blockdev writev readv block ...passed 00:29:47.581 Test: blockdev writev readv size > 128k ...passed 00:29:47.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:47.581 Test: blockdev comparev and writev ...[2024-06-11 13:15:06.320328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x118e0b000 len:0x1000 00:29:47.581 [2024-06-11 13:15:06.320533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:47.581 passed 00:29:47.581 Test: blockdev nvme passthru rw ...passed 00:29:47.581 Test: blockdev nvme passthru vendor specific ...passed 00:29:47.581 Test: blockdev nvme admin passthru ...passed 00:29:47.581 Test: blockdev copy ...passed 00:29:47.581 Suite: bdevio tests on: Nvme0n1p1 00:29:47.581 Test: blockdev write read block ...passed 00:29:47.581 Test: blockdev write zeroes read block ...passed 00:29:47.581 Test: blockdev write zeroes read no split ...passed 00:29:47.581 Test: blockdev write zeroes read split ...passed 00:29:47.581 Test: blockdev write zeroes read split partial ...passed 00:29:47.581 Test: blockdev reset ...[2024-06-11 13:15:06.374785] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:47.581 [2024-06-11 13:15:06.377901] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:47.581 passed 00:29:47.581 Test: blockdev write read 8 blocks ...passed 00:29:47.581 Test: blockdev write read size > 128k ...passed 00:29:47.581 Test: blockdev write read invalid size ...passed 00:29:47.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:47.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:47.581 Test: blockdev write read max offset ...passed 00:29:47.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:47.581 Test: blockdev writev readv 8 blocks ...passed 00:29:47.581 Test: blockdev writev readv 30 x 1block ...passed 00:29:47.581 Test: blockdev writev readv block ...passed 00:29:47.581 Test: blockdev writev readv size > 128k ...passed 00:29:47.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:47.581 Test: blockdev comparev and writev ...[2024-06-11 13:15:06.387697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x118e0d000 len:0x1000 00:29:47.581 [2024-06-11 13:15:06.387887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:47.581 passed 00:29:47.581 Test: blockdev nvme passthru rw ...passed 00:29:47.581 Test: blockdev nvme passthru vendor specific ...passed 00:29:47.581 Test: blockdev nvme admin passthru ...passed 00:29:47.581 Test: blockdev copy ...passed 00:29:47.581 00:29:47.581 Run Summary: Type Total Ran Passed Failed Inactive 00:29:47.581 suites 2 2 n/a 0 0 00:29:47.581 tests 46 46 46 0 0 00:29:47.581 asserts 284 284 284 0 n/a 00:29:47.581 00:29:47.581 Elapsed time = 0.384 seconds 00:29:47.581 0 00:29:47.581 13:15:06 -- bdev/blockdev.sh@293 -- # killprocess 142201 00:29:47.581 13:15:06 -- common/autotest_common.sh@926 -- # '[' -z 142201 ']' 00:29:47.581 13:15:06 -- common/autotest_common.sh@930 -- # kill -0 142201 00:29:47.581 13:15:06 -- common/autotest_common.sh@931 -- # uname 00:29:47.581 13:15:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:47.582 13:15:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142201 00:29:47.840 13:15:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:47.840 13:15:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:47.840 13:15:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142201' 00:29:47.840 killing process with pid 142201 00:29:47.840 13:15:06 -- common/autotest_common.sh@945 -- # kill 142201 00:29:47.840 13:15:06 -- common/autotest_common.sh@950 -- # wait 142201 00:29:48.778 13:15:07 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:48.778 00:29:48.778 real 0m2.411s 00:29:48.778 user 0m5.411s 00:29:48.778 sys 0m0.451s 00:29:48.778 ************************************ 00:29:48.778 13:15:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:48.778 13:15:07 -- common/autotest_common.sh@10 -- # set +x 00:29:48.778 END TEST bdev_bounds 00:29:48.778 ************************************ 00:29:48.778 13:15:07 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:29:48.778 13:15:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:29:48.778 13:15:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:48.778 13:15:07 -- common/autotest_common.sh@10 -- # set +x 00:29:48.778 ************************************ 00:29:48.778 START TEST bdev_nbd 
00:29:48.778 ************************************ 00:29:48.778 13:15:07 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:29:48.778 13:15:07 -- bdev/blockdev.sh@298 -- # uname -s 00:29:48.778 13:15:07 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:48.778 13:15:07 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:48.778 13:15:07 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:48.778 13:15:07 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:29:48.778 13:15:07 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:48.778 13:15:07 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:29:48.778 13:15:07 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:48.778 13:15:07 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:29:48.778 13:15:07 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:48.778 13:15:07 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:29:48.778 13:15:07 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:29:48.778 13:15:07 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:48.778 13:15:07 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:29:48.778 13:15:07 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:48.778 13:15:07 -- bdev/blockdev.sh@316 -- # nbd_pid=142264 00:29:48.778 13:15:07 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:48.778 13:15:07 -- bdev/blockdev.sh@318 -- # waitforlisten 142264 /var/tmp/spdk-nbd.sock 00:29:48.778 13:15:07 -- common/autotest_common.sh@819 -- # '[' -z 142264 ']' 00:29:48.778 13:15:07 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:48.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:48.778 13:15:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:48.778 13:15:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:48.778 13:15:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:48.778 13:15:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:48.778 13:15:07 -- common/autotest_common.sh@10 -- # set +x 00:29:48.778 [2024-06-11 13:15:07.596049] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:48.778 [2024-06-11 13:15:07.596250] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.037 [2024-06-11 13:15:07.769469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.296 [2024-06-11 13:15:07.968820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.861 13:15:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:49.861 13:15:08 -- common/autotest_common.sh@852 -- # return 0 00:29:49.861 13:15:08 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@24 -- # local i 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:29:49.861 13:15:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:29:50.119 13:15:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:50.119 13:15:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:50.119 13:15:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:50.119 13:15:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:50.119 13:15:08 -- common/autotest_common.sh@857 -- # local i 00:29:50.119 13:15:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:50.119 13:15:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:50.119 13:15:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:50.119 13:15:08 -- common/autotest_common.sh@861 -- # break 00:29:50.119 13:15:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:50.119 13:15:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:50.119 13:15:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:50.119 1+0 records in 00:29:50.119 1+0 records out 00:29:50.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502055 s, 8.2 MB/s 00:29:50.119 13:15:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.119 13:15:08 -- common/autotest_common.sh@874 -- # size=4096 00:29:50.119 13:15:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.119 13:15:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:50.119 13:15:08 -- common/autotest_common.sh@877 -- # return 0 00:29:50.119 13:15:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:50.119 13:15:08 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:29:50.119 13:15:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme0n1p2 00:29:50.377 13:15:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:29:50.377 13:15:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:50.377 13:15:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:50.377 13:15:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:29:50.377 13:15:09 -- common/autotest_common.sh@857 -- # local i 00:29:50.377 13:15:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:50.377 13:15:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:50.377 13:15:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:29:50.377 13:15:09 -- common/autotest_common.sh@861 -- # break 00:29:50.377 13:15:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:50.377 13:15:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:50.377 13:15:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:50.377 1+0 records in 00:29:50.377 1+0 records out 00:29:50.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538591 s, 7.6 MB/s 00:29:50.377 13:15:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.377 13:15:09 -- common/autotest_common.sh@874 -- # size=4096 00:29:50.377 13:15:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.377 13:15:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:50.377 13:15:09 -- common/autotest_common.sh@877 -- # return 0 00:29:50.377 13:15:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:50.377 13:15:09 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:29:50.377 13:15:09 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:50.635 13:15:09 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:50.635 { 00:29:50.635 "nbd_device": "/dev/nbd0", 00:29:50.635 "bdev_name": "Nvme0n1p1" 00:29:50.635 }, 00:29:50.635 { 00:29:50.635 "nbd_device": "/dev/nbd1", 00:29:50.635 "bdev_name": "Nvme0n1p2" 00:29:50.635 } 00:29:50.635 ]' 00:29:50.635 13:15:09 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:50.635 13:15:09 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:50.635 { 00:29:50.635 "nbd_device": "/dev/nbd0", 00:29:50.635 "bdev_name": "Nvme0n1p1" 00:29:50.635 }, 00:29:50.635 { 00:29:50.635 "nbd_device": "/dev/nbd1", 00:29:50.635 "bdev_name": "Nvme0n1p2" 00:29:50.635 } 00:29:50.635 ]' 00:29:50.635 13:15:09 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:50.635 13:15:09 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:50.635 13:15:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:50.635 13:15:09 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:50.635 13:15:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:50.635 13:15:09 -- bdev/nbd_common.sh@51 -- # local i 00:29:50.635 13:15:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.635 13:15:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:50.893 13:15:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:50.893 13:15:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:50.893 13:15:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:50.893 13:15:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.893 13:15:09 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.893 13:15:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:50.893 13:15:09 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:50.893 13:15:09 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:50.893 13:15:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.893 13:15:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@41 -- # break 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@41 -- # break 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.151 13:15:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@65 -- # true 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@65 -- # count=0 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@122 -- # count=0 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@127 -- # return 0 00:29:51.409 13:15:10 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:29:51.409 13:15:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.410 13:15:10 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:51.410 13:15:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:51.410 13:15:10 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:51.410 13:15:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:51.410 13:15:10 -- bdev/nbd_common.sh@12 -- # local i 00:29:51.410 13:15:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:51.410 13:15:10 -- bdev/nbd_common.sh@14 -- 
# (( i < 2 )) 00:29:51.410 13:15:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:29:51.670 /dev/nbd0 00:29:51.670 13:15:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:51.670 13:15:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:51.670 13:15:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:51.670 13:15:10 -- common/autotest_common.sh@857 -- # local i 00:29:51.670 13:15:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:51.670 13:15:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:51.670 13:15:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:51.670 13:15:10 -- common/autotest_common.sh@861 -- # break 00:29:51.670 13:15:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:51.670 13:15:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:51.670 13:15:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:51.670 1+0 records in 00:29:51.670 1+0 records out 00:29:51.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522632 s, 7.8 MB/s 00:29:51.670 13:15:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.670 13:15:10 -- common/autotest_common.sh@874 -- # size=4096 00:29:51.671 13:15:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.671 13:15:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:51.671 13:15:10 -- common/autotest_common.sh@877 -- # return 0 00:29:51.671 13:15:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:51.671 13:15:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:51.671 13:15:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:29:51.935 /dev/nbd1 00:29:51.935 13:15:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:51.935 13:15:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:51.935 13:15:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:29:51.935 13:15:10 -- common/autotest_common.sh@857 -- # local i 00:29:51.935 13:15:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:51.935 13:15:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:51.935 13:15:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:29:51.935 13:15:10 -- common/autotest_common.sh@861 -- # break 00:29:51.935 13:15:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:51.936 13:15:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:51.936 13:15:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:51.936 1+0 records in 00:29:51.936 1+0 records out 00:29:51.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670435 s, 6.1 MB/s 00:29:51.936 13:15:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.936 13:15:10 -- common/autotest_common.sh@874 -- # size=4096 00:29:51.936 13:15:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.936 13:15:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:51.936 13:15:10 -- common/autotest_common.sh@877 -- # return 0 00:29:51.936 13:15:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:51.936 13:15:10 -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:51.936 13:15:10 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:51.936 13:15:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.936 13:15:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:52.194 13:15:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:52.194 { 00:29:52.194 "nbd_device": "/dev/nbd0", 00:29:52.194 "bdev_name": "Nvme0n1p1" 00:29:52.194 }, 00:29:52.194 { 00:29:52.194 "nbd_device": "/dev/nbd1", 00:29:52.194 "bdev_name": "Nvme0n1p2" 00:29:52.194 } 00:29:52.194 ]' 00:29:52.194 13:15:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:52.194 13:15:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:52.194 { 00:29:52.194 "nbd_device": "/dev/nbd0", 00:29:52.194 "bdev_name": "Nvme0n1p1" 00:29:52.194 }, 00:29:52.194 { 00:29:52.194 "nbd_device": "/dev/nbd1", 00:29:52.194 "bdev_name": "Nvme0n1p2" 00:29:52.194 } 00:29:52.194 ]' 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:52.194 /dev/nbd1' 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:52.194 /dev/nbd1' 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@65 -- # count=2 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@95 -- # count=2 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:52.194 13:15:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:52.453 256+0 records in 00:29:52.453 256+0 records out 00:29:52.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00754093 s, 139 MB/s 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:52.453 256+0 records in 00:29:52.453 256+0 records out 00:29:52.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104545 s, 10.0 MB/s 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:52.453 256+0 records in 00:29:52.453 256+0 records out 00:29:52.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0869595 s, 12.1 MB/s 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write 
']' 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@51 -- # local i 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:52.453 13:15:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:52.712 13:15:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:52.712 13:15:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:52.712 13:15:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:52.712 13:15:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:52.712 13:15:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:52.712 13:15:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:52.712 13:15:11 -- bdev/nbd_common.sh@41 -- # break 00:29:52.712 13:15:11 -- bdev/nbd_common.sh@45 -- # return 0 00:29:52.712 13:15:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:52.712 13:15:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:52.970 13:15:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:52.970 13:15:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:52.970 13:15:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:52.970 13:15:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:52.970 13:15:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:52.971 13:15:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:52.971 13:15:11 -- bdev/nbd_common.sh@41 -- # break 00:29:52.971 13:15:11 -- bdev/nbd_common.sh@45 -- # return 0 00:29:52.971 13:15:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:52.971 13:15:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:52.971 13:15:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@65 -- # true 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@65 -- # count=0 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@104 -- # count=0 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@105 -- 
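The NBD export/verify cycle traced above (with bdev_svc already serving RPC on /var/tmp/spdk-nbd.sock, as started earlier in this section) condenses to the commands below. Every command is lifted from the trace; only the shell variables and the condensed ordering are illustrative, and the retry loops of nbd_common.sh are omitted.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # same RPC socket as in the trace
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    $rpc nbd_start_disk Nvme0n1p1 /dev/nbd0          # expose each GPT partition as a kernel block device
    $rpc nbd_start_disk Nvme0n1p2 /dev/nbd1
    dd if=/dev/urandom of=$tmp bs=4096 count=256     # 1 MiB of random data to push through both devices
    dd if=$tmp of=/dev/nbd0 bs=4096 count=256 oflag=direct
    dd if=$tmp of=/dev/nbd1 bs=4096 count=256 oflag=direct
    cmp -b -n 1M $tmp /dev/nbd0                      # read back through the kernel and byte-compare
    cmp -b -n 1M $tmp /dev/nbd1
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1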
# '[' 0 -ne 0 ']' 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@109 -- # return 0 00:29:53.538 13:15:12 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:53.538 13:15:12 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:53.796 malloc_lvol_verify 00:29:53.796 13:15:12 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:54.054 4fee5bae-2228-49d2-a6c6-c52ddc7c69fc 00:29:54.054 13:15:12 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:54.313 9283075f-a028-4808-a4e6-2bae5b4c59a8 00:29:54.313 13:15:12 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:54.572 /dev/nbd0 00:29:54.572 13:15:13 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:54.572 mke2fs 1.45.5 (07-Jan-2020) 00:29:54.572 00:29:54.572 Filesystem too small for a journal 00:29:54.572 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:54.572 00:29:54.572 Allocating group tables: 0/1 done 00:29:54.572 Writing inode tables: 0/1 done 00:29:54.572 Writing superblocks and filesystem accounting information: 0/1 done 00:29:54.572 00:29:54.572 13:15:13 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:54.572 13:15:13 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:54.572 13:15:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:54.572 13:15:13 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:54.572 13:15:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:54.572 13:15:13 -- bdev/nbd_common.sh@51 -- # local i 00:29:54.572 13:15:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:54.572 13:15:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:54.831 13:15:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:54.831 13:15:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:54.831 13:15:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:54.831 13:15:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:54.831 13:15:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:54.831 13:15:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:54.831 13:15:13 -- bdev/nbd_common.sh@41 -- # break 00:29:54.831 13:15:13 -- bdev/nbd_common.sh@45 -- # return 0 00:29:54.831 13:15:13 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:54.831 13:15:13 -- bdev/nbd_common.sh@147 -- # return 0 00:29:54.831 13:15:13 -- bdev/blockdev.sh@324 -- # killprocess 142264 00:29:54.831 13:15:13 -- common/autotest_common.sh@926 -- # '[' -z 142264 ']' 00:29:54.831 13:15:13 -- common/autotest_common.sh@930 -- # kill -0 142264 00:29:54.831 13:15:13 -- common/autotest_common.sh@931 -- # uname 00:29:54.831 13:15:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:54.831 13:15:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
142264 00:29:54.831 13:15:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:54.831 killing process with pid 142264 00:29:54.831 13:15:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:54.831 13:15:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142264' 00:29:54.831 13:15:13 -- common/autotest_common.sh@945 -- # kill 142264 00:29:54.831 13:15:13 -- common/autotest_common.sh@950 -- # wait 142264 00:29:56.209 13:15:14 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:56.209 ************************************ 00:29:56.209 END TEST bdev_nbd 00:29:56.209 ************************************ 00:29:56.209 00:29:56.209 real 0m7.108s 00:29:56.209 user 0m10.226s 00:29:56.209 sys 0m1.695s 00:29:56.209 13:15:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.209 13:15:14 -- common/autotest_common.sh@10 -- # set +x 00:29:56.209 13:15:14 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:56.209 skipping fio tests on NVMe due to multi-ns failures. 00:29:56.210 13:15:14 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:29:56.210 13:15:14 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:29:56.210 13:15:14 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:56.210 13:15:14 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:56.210 13:15:14 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:56.210 13:15:14 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:29:56.210 13:15:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:56.210 13:15:14 -- common/autotest_common.sh@10 -- # set +x 00:29:56.210 ************************************ 00:29:56.210 START TEST bdev_verify 00:29:56.210 ************************************ 00:29:56.210 13:15:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:56.210 [2024-06-11 13:15:14.738448] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:56.210 [2024-06-11 13:15:14.738599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142533 ] 00:29:56.210 [2024-06-11 13:15:14.896168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:56.469 [2024-06-11 13:15:15.078758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.469 [2024-06-11 13:15:15.078764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.727 Running I/O for 5 seconds... 
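The verify pass whose results follow is a single bdevperf run; the command and flags are exactly the ones recorded above, and the flag notes give the usual meaning of these bdevperf options (sketch for readability, -C is simply passed through as in the harness).
    # -q 128    : 128 outstanding I/Os per job        -o 4096 : 4 KiB I/O size
    # -w verify : write, read back and compare        -t 5    : run for 5 seconds
    # -m 0x3    : two reactor cores, which is why each bdev appears twice (core mask 0x1 and 0x2) in the table below
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3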
00:30:01.993 00:30:01.993 Latency(us) 00:30:01.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.994 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:01.994 Verification LBA range: start 0x0 length 0x4ff80 00:30:01.994 Nvme0n1p1 : 5.02 5454.77 21.31 0.00 0.00 23403.57 2815.07 22163.08 00:30:01.994 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:01.994 Verification LBA range: start 0x4ff80 length 0x4ff80 00:30:01.994 Nvme0n1p1 : 5.02 5401.09 21.10 0.00 0.00 23637.80 1817.13 28716.68 00:30:01.994 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:01.994 Verification LBA range: start 0x0 length 0x4ff7f 00:30:01.994 Nvme0n1p2 : 5.02 5459.16 21.32 0.00 0.00 23372.31 387.26 21567.30 00:30:01.994 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:01.994 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:30:01.994 Nvme0n1p2 : 5.02 5398.89 21.09 0.00 0.00 23608.41 2949.12 25856.93 00:30:01.994 =================================================================================================================== 00:30:01.994 Total : 21713.91 84.82 0.00 0.00 23504.92 387.26 28716.68 00:30:05.277 ************************************ 00:30:05.277 END TEST bdev_verify 00:30:05.277 ************************************ 00:30:05.277 00:30:05.277 real 0m9.148s 00:30:05.277 user 0m17.188s 00:30:05.277 sys 0m0.260s 00:30:05.277 13:15:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.277 13:15:23 -- common/autotest_common.sh@10 -- # set +x 00:30:05.277 13:15:23 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:05.277 13:15:23 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:05.277 13:15:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:05.277 13:15:23 -- common/autotest_common.sh@10 -- # set +x 00:30:05.277 ************************************ 00:30:05.277 START TEST bdev_verify_big_io 00:30:05.277 ************************************ 00:30:05.277 13:15:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:05.277 [2024-06-11 13:15:23.944088] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:05.277 [2024-06-11 13:15:23.944241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142682 ] 00:30:05.277 [2024-06-11 13:15:24.099826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:05.538 [2024-06-11 13:15:24.298697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.538 [2024-06-11 13:15:24.298697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.111 Running I/O for 5 seconds... 
00:30:11.376 00:30:11.376 Latency(us) 00:30:11.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.376 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:11.376 Verification LBA range: start 0x0 length 0x4ff8 00:30:11.376 Nvme0n1p1 : 5.09 1124.22 70.26 0.00 0.00 112832.29 2234.18 160146.15 00:30:11.376 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:11.376 Verification LBA range: start 0x4ff8 length 0x4ff8 00:30:11.376 Nvme0n1p1 : 5.08 1293.59 80.85 0.00 0.00 98126.79 2755.49 140127.88 00:30:11.376 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:11.376 Verification LBA range: start 0x0 length 0x4ff7 00:30:11.376 Nvme0n1p2 : 5.09 1123.75 70.23 0.00 0.00 111731.57 2919.33 121539.49 00:30:11.376 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:11.376 Verification LBA range: start 0x4ff7 length 0x4ff7 00:30:11.376 Nvme0n1p2 : 5.09 1293.02 80.81 0.00 0.00 97322.20 3455.53 118203.11 00:30:11.376 =================================================================================================================== 00:30:11.376 Total : 4834.58 302.16 0.00 0.00 104494.68 2234.18 160146.15 00:30:12.750 00:30:12.751 real 0m7.462s 00:30:12.751 user 0m13.801s 00:30:12.751 sys 0m0.261s 00:30:12.751 ************************************ 00:30:12.751 END TEST bdev_verify_big_io 00:30:12.751 ************************************ 00:30:12.751 13:15:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:12.751 13:15:31 -- common/autotest_common.sh@10 -- # set +x 00:30:12.751 13:15:31 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:12.751 13:15:31 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:12.751 13:15:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:12.751 13:15:31 -- common/autotest_common.sh@10 -- # set +x 00:30:12.751 ************************************ 00:30:12.751 START TEST bdev_write_zeroes 00:30:12.751 ************************************ 00:30:12.751 13:15:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:12.751 [2024-06-11 13:15:31.465241] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:12.751 [2024-06-11 13:15:31.465435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142808 ] 00:30:13.009 [2024-06-11 13:15:31.624200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.009 [2024-06-11 13:15:31.795167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.574 Running I/O for 1 seconds... 
00:30:14.507 00:30:14.507 Latency(us) 00:30:14.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.507 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:14.507 Nvme0n1p1 : 1.00 29099.89 113.67 0.00 0.00 4389.42 2204.39 15728.64 00:30:14.507 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:14.507 Nvme0n1p2 : 1.01 29159.14 113.90 0.00 0.00 4373.73 2263.97 11915.64 00:30:14.507 =================================================================================================================== 00:30:14.507 Total : 58259.04 227.57 0.00 0.00 4381.56 2204.39 15728.64 00:30:15.454 00:30:15.454 real 0m2.784s 00:30:15.454 user 0m2.450s 00:30:15.454 sys 0m0.235s 00:30:15.454 13:15:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:15.454 13:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.454 ************************************ 00:30:15.454 END TEST bdev_write_zeroes 00:30:15.454 ************************************ 00:30:15.454 13:15:34 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:15.454 13:15:34 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:15.454 13:15:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:15.454 13:15:34 -- common/autotest_common.sh@10 -- # set +x 00:30:15.454 ************************************ 00:30:15.454 START TEST bdev_json_nonenclosed 00:30:15.454 ************************************ 00:30:15.454 13:15:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:15.712 [2024-06-11 13:15:34.310862] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:15.712 [2024-06-11 13:15:34.311063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142868 ] 00:30:15.712 [2024-06-11 13:15:34.477606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.970 [2024-06-11 13:15:34.672864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.970 [2024-06-11 13:15:34.673046] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:15.970 [2024-06-11 13:15:34.673086] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:16.229 00:30:16.229 real 0m0.797s 00:30:16.229 user 0m0.556s 00:30:16.229 sys 0m0.140s 00:30:16.229 13:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:16.229 ************************************ 00:30:16.229 END TEST bdev_json_nonenclosed 00:30:16.229 ************************************ 00:30:16.229 13:15:35 -- common/autotest_common.sh@10 -- # set +x 00:30:16.488 13:15:35 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:16.488 13:15:35 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:16.488 13:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:16.488 13:15:35 -- common/autotest_common.sh@10 -- # set +x 00:30:16.488 ************************************ 00:30:16.488 START TEST bdev_json_nonarray 00:30:16.488 ************************************ 00:30:16.488 13:15:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:16.488 [2024-06-11 13:15:35.151218] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:16.488 [2024-06-11 13:15:35.151354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142906 ] 00:30:16.488 [2024-06-11 13:15:35.308989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.747 [2024-06-11 13:15:35.502538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.747 [2024-06-11 13:15:35.502748] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:30:16.747 [2024-06-11 13:15:35.502806] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:17.315 00:30:17.315 real 0m0.774s 00:30:17.315 user 0m0.550s 00:30:17.315 sys 0m0.125s 00:30:17.315 13:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:17.315 ************************************ 00:30:17.315 END TEST bdev_json_nonarray 00:30:17.315 ************************************ 00:30:17.315 13:15:35 -- common/autotest_common.sh@10 -- # set +x 00:30:17.315 13:15:35 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:30:17.315 13:15:35 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:30:17.315 13:15:35 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:30:17.315 13:15:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:17.315 13:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:17.315 13:15:35 -- common/autotest_common.sh@10 -- # set +x 00:30:17.315 ************************************ 00:30:17.315 START TEST bdev_gpt_uuid 00:30:17.315 ************************************ 00:30:17.315 13:15:35 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:30:17.315 13:15:35 -- bdev/blockdev.sh@612 -- # local bdev 00:30:17.315 13:15:35 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:30:17.315 13:15:35 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=142940 00:30:17.315 13:15:35 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:17.315 13:15:35 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:17.315 13:15:35 -- bdev/blockdev.sh@47 -- # waitforlisten 142940 00:30:17.315 13:15:35 -- common/autotest_common.sh@819 -- # '[' -z 142940 ']' 00:30:17.315 13:15:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.315 13:15:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:17.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.315 13:15:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.315 13:15:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:17.315 13:15:35 -- common/autotest_common.sh@10 -- # set +x 00:30:17.315 [2024-06-11 13:15:35.994920] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:17.315 [2024-06-11 13:15:35.995061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142940 ] 00:30:17.315 [2024-06-11 13:15:36.152376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.574 [2024-06-11 13:15:36.354118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:17.574 [2024-06-11 13:15:36.354326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.949 13:15:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:18.950 13:15:37 -- common/autotest_common.sh@852 -- # return 0 00:30:18.950 13:15:37 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:18.950 13:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.950 13:15:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.208 Some configs were skipped because the RPC state that can call them passed over. 
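The GPT UUID assertions that follow boil down to looking a partition up by its unique GUID and comparing a few jq projections against it. The RPC name, the GUID and the jq filters below are copied from the trace; the shell variables are illustrative, and the target is assumed to answer on the default /var/tmp/spdk.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # fetch the first GPT partition by its unique partition GUID
    bdev_json=$($rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
    echo "$bdev_json" | jq -r length                                            # exactly one bdev expected
    echo "$bdev_json" | jq -r '.[0].aliases[0]'                                 # alias should equal the GUID
    echo "$bdev_json" | jq -r '.[0].driver_specific.gpt.unique_partition_guid'  # and so should the GPT metadata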
00:30:19.208 13:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.208 13:15:37 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:30:19.208 13:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.208 13:15:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.208 13:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.208 13:15:37 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:19.208 13:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.208 13:15:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.208 13:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.208 13:15:37 -- bdev/blockdev.sh@619 -- # bdev='[ 00:30:19.208 { 00:30:19.208 "name": "Nvme0n1p1", 00:30:19.208 "aliases": [ 00:30:19.208 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:19.208 ], 00:30:19.208 "product_name": "GPT Disk", 00:30:19.208 "block_size": 4096, 00:30:19.208 "num_blocks": 655104, 00:30:19.208 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:19.208 "assigned_rate_limits": { 00:30:19.208 "rw_ios_per_sec": 0, 00:30:19.208 "rw_mbytes_per_sec": 0, 00:30:19.208 "r_mbytes_per_sec": 0, 00:30:19.208 "w_mbytes_per_sec": 0 00:30:19.208 }, 00:30:19.208 "claimed": false, 00:30:19.208 "zoned": false, 00:30:19.208 "supported_io_types": { 00:30:19.208 "read": true, 00:30:19.208 "write": true, 00:30:19.208 "unmap": true, 00:30:19.208 "write_zeroes": true, 00:30:19.208 "flush": true, 00:30:19.208 "reset": true, 00:30:19.208 "compare": true, 00:30:19.208 "compare_and_write": false, 00:30:19.208 "abort": true, 00:30:19.208 "nvme_admin": false, 00:30:19.208 "nvme_io": false 00:30:19.208 }, 00:30:19.208 "driver_specific": { 00:30:19.208 "gpt": { 00:30:19.208 "base_bdev": "Nvme0n1", 00:30:19.208 "offset_blocks": 256, 00:30:19.208 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:19.208 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:19.208 "partition_name": "SPDK_TEST_first" 00:30:19.208 } 00:30:19.208 } 00:30:19.208 } 00:30:19.208 ]' 00:30:19.208 13:15:37 -- bdev/blockdev.sh@620 -- # jq -r length 00:30:19.208 13:15:37 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:30:19.208 13:15:37 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:30:19.208 13:15:37 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:19.208 13:15:37 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:19.208 13:15:37 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:19.208 13:15:37 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:19.208 13:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.208 13:15:37 -- common/autotest_common.sh@10 -- # set +x 00:30:19.208 13:15:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.208 13:15:38 -- bdev/blockdev.sh@624 -- # bdev='[ 00:30:19.208 { 00:30:19.208 "name": "Nvme0n1p2", 00:30:19.208 "aliases": [ 00:30:19.208 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:19.208 ], 00:30:19.208 "product_name": "GPT Disk", 00:30:19.208 "block_size": 4096, 00:30:19.209 "num_blocks": 655103, 00:30:19.209 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:19.209 "assigned_rate_limits": { 00:30:19.209 "rw_ios_per_sec": 0, 00:30:19.209 
"rw_mbytes_per_sec": 0, 00:30:19.209 "r_mbytes_per_sec": 0, 00:30:19.209 "w_mbytes_per_sec": 0 00:30:19.209 }, 00:30:19.209 "claimed": false, 00:30:19.209 "zoned": false, 00:30:19.209 "supported_io_types": { 00:30:19.209 "read": true, 00:30:19.209 "write": true, 00:30:19.209 "unmap": true, 00:30:19.209 "write_zeroes": true, 00:30:19.209 "flush": true, 00:30:19.209 "reset": true, 00:30:19.209 "compare": true, 00:30:19.209 "compare_and_write": false, 00:30:19.209 "abort": true, 00:30:19.209 "nvme_admin": false, 00:30:19.209 "nvme_io": false 00:30:19.209 }, 00:30:19.209 "driver_specific": { 00:30:19.209 "gpt": { 00:30:19.209 "base_bdev": "Nvme0n1", 00:30:19.209 "offset_blocks": 655360, 00:30:19.209 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:19.209 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:19.209 "partition_name": "SPDK_TEST_second" 00:30:19.209 } 00:30:19.209 } 00:30:19.209 } 00:30:19.209 ]' 00:30:19.209 13:15:38 -- bdev/blockdev.sh@625 -- # jq -r length 00:30:19.467 13:15:38 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:30:19.467 13:15:38 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:30:19.467 13:15:38 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:19.467 13:15:38 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:19.467 13:15:38 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:19.467 13:15:38 -- bdev/blockdev.sh@629 -- # killprocess 142940 00:30:19.467 13:15:38 -- common/autotest_common.sh@926 -- # '[' -z 142940 ']' 00:30:19.467 13:15:38 -- common/autotest_common.sh@930 -- # kill -0 142940 00:30:19.467 13:15:38 -- common/autotest_common.sh@931 -- # uname 00:30:19.467 13:15:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:19.467 13:15:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142940 00:30:19.467 13:15:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:19.467 killing process with pid 142940 00:30:19.467 13:15:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:19.467 13:15:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142940' 00:30:19.467 13:15:38 -- common/autotest_common.sh@945 -- # kill 142940 00:30:19.467 13:15:38 -- common/autotest_common.sh@950 -- # wait 142940 00:30:21.371 00:30:21.371 real 0m4.254s 00:30:21.371 user 0m4.712s 00:30:21.371 sys 0m0.513s 00:30:21.371 13:15:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.371 ************************************ 00:30:21.371 END TEST bdev_gpt_uuid 00:30:21.371 ************************************ 00:30:21.371 13:15:40 -- common/autotest_common.sh@10 -- # set +x 00:30:21.630 13:15:40 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:30:21.630 13:15:40 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:21.630 13:15:40 -- bdev/blockdev.sh@809 -- # cleanup 00:30:21.630 13:15:40 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:21.630 13:15:40 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:21.630 13:15:40 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:30:21.630 13:15:40 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:30:21.630 13:15:40 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:30:21.630 13:15:40 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:21.889 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:21.889 Waiting for block devices as requested 00:30:21.889 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:21.889 13:15:40 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:30:21.889 13:15:40 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:30:21.889 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:21.889 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:21.889 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:21.889 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:21.889 13:15:40 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:30:21.889 00:30:21.889 real 0m46.401s 00:30:21.889 user 1m6.485s 00:30:21.889 sys 0m6.225s 00:30:21.889 13:15:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.889 13:15:40 -- common/autotest_common.sh@10 -- # set +x 00:30:21.889 ************************************ 00:30:21.889 END TEST blockdev_nvme_gpt 00:30:21.889 ************************************ 00:30:22.148 13:15:40 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:22.148 13:15:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:22.148 13:15:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:22.148 13:15:40 -- common/autotest_common.sh@10 -- # set +x 00:30:22.148 ************************************ 00:30:22.148 START TEST nvme 00:30:22.148 ************************************ 00:30:22.148 13:15:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:22.148 * Looking for test storage... 00:30:22.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:22.148 13:15:40 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:22.407 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:22.666 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:23.601 13:15:42 -- nvme/nvme.sh@79 -- # uname 00:30:23.601 13:15:42 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:23.601 13:15:42 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:23.601 13:15:42 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:23.601 13:15:42 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:23.601 Waiting for stub to ready for secondary processes... 00:30:23.601 13:15:42 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:30:23.601 13:15:42 -- common/autotest_common.sh@1045 -- # echo 0 00:30:23.601 13:15:42 -- common/autotest_common.sh@1047 -- # stubpid=143392 00:30:23.601 13:15:42 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:23.601 13:15:42 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:30:23.601 13:15:42 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:23.601 13:15:42 -- common/autotest_common.sh@1051 -- # [[ -e /proc/143392 ]] 00:30:23.601 13:15:42 -- common/autotest_common.sh@1052 -- # sleep 1s 00:30:23.601 [2024-06-11 13:15:42.416090] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
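The startup banner around this point belongs to the stub helper that nvme.sh launches before the individual NVMe tests. As far as the trace shows, the point is to keep one primary DPDK process alive holding the hugepage memory (shm id 0), so that the short-lived test binaries which follow can attach to it instead of re-initializing the environment each time. A sketch of the equivalent manual invocation, using the exact arguments from the log:

    # hold the hugepage memory (-s 4096, i.e. 4096 MB per the EAL "-m 4096" that follows) as the primary process
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    # the harness then polls for the stub's marker file before letting the tests proceed
    while [ ! -e /var/run/spdk_stub0 ]; do sleep 1; done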
00:30:23.601 [2024-06-11 13:15:42.416406] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.537 13:15:43 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:24.537 13:15:43 -- common/autotest_common.sh@1051 -- # [[ -e /proc/143392 ]] 00:30:24.537 13:15:43 -- common/autotest_common.sh@1052 -- # sleep 1s 00:30:25.108 [2024-06-11 13:15:43.794273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:25.366 [2024-06-11 13:15:44.033008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:25.366 [2024-06-11 13:15:44.033171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:25.366 [2024-06-11 13:15:44.033175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.366 [2024-06-11 13:15:44.053611] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:25.366 [2024-06-11 13:15:44.062556] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:25.366 [2024-06-11 13:15:44.063241] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:25.625 done. 00:30:25.625 13:15:44 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:25.625 13:15:44 -- common/autotest_common.sh@1054 -- # echo done. 00:30:25.625 13:15:44 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:25.625 13:15:44 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:30:25.625 13:15:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:25.625 13:15:44 -- common/autotest_common.sh@10 -- # set +x 00:30:25.625 ************************************ 00:30:25.625 START TEST nvme_reset 00:30:25.625 ************************************ 00:30:25.625 13:15:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:25.883 Initializing NVMe Controllers 00:30:25.883 Skipping QEMU NVMe SSD at 0000:00:06.0 00:30:25.883 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:26.141 ************************************ 00:30:26.141 END TEST nvme_reset 00:30:26.141 ************************************ 00:30:26.141 00:30:26.141 real 0m0.337s 00:30:26.141 user 0m0.132s 00:30:26.141 sys 0m0.131s 00:30:26.141 13:15:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.141 13:15:44 -- common/autotest_common.sh@10 -- # set +x 00:30:26.141 13:15:44 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:26.141 13:15:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:26.141 13:15:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:26.141 13:15:44 -- common/autotest_common.sh@10 -- # set +x 00:30:26.141 ************************************ 00:30:26.141 START TEST nvme_identify 00:30:26.141 ************************************ 00:30:26.141 13:15:44 -- common/autotest_common.sh@1104 -- # nvme_identify 00:30:26.141 13:15:44 -- nvme/nvme.sh@12 -- # bdfs=() 00:30:26.141 13:15:44 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:30:26.141 13:15:44 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:26.141 13:15:44 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:26.141 13:15:44 -- common/autotest_common.sh@1498 -- # bdfs=() 
00:30:26.141 13:15:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:26.141 13:15:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:26.141 13:15:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:26.141 13:15:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:26.141 13:15:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:26.141 13:15:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:30:26.141 13:15:44 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:26.400 [2024-06-11 13:15:45.109325] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 143436 terminated unexpected 00:30:26.400 ===================================================== 00:30:26.400 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:26.400 ===================================================== 00:30:26.400 Controller Capabilities/Features 00:30:26.400 ================================ 00:30:26.400 Vendor ID: 1b36 00:30:26.400 Subsystem Vendor ID: 1af4 00:30:26.400 Serial Number: 12340 00:30:26.400 Model Number: QEMU NVMe Ctrl 00:30:26.400 Firmware Version: 8.0.0 00:30:26.400 Recommended Arb Burst: 6 00:30:26.400 IEEE OUI Identifier: 00 54 52 00:30:26.400 Multi-path I/O 00:30:26.400 May have multiple subsystem ports: No 00:30:26.400 May have multiple controllers: No 00:30:26.400 Associated with SR-IOV VF: No 00:30:26.400 Max Data Transfer Size: 524288 00:30:26.400 Max Number of Namespaces: 256 00:30:26.400 Max Number of I/O Queues: 64 00:30:26.400 NVMe Specification Version (VS): 1.4 00:30:26.400 NVMe Specification Version (Identify): 1.4 00:30:26.400 Maximum Queue Entries: 2048 00:30:26.400 Contiguous Queues Required: Yes 00:30:26.400 Arbitration Mechanisms Supported 00:30:26.400 Weighted Round Robin: Not Supported 00:30:26.400 Vendor Specific: Not Supported 00:30:26.400 Reset Timeout: 7500 ms 00:30:26.400 Doorbell Stride: 4 bytes 00:30:26.400 NVM Subsystem Reset: Not Supported 00:30:26.400 Command Sets Supported 00:30:26.400 NVM Command Set: Supported 00:30:26.400 Boot Partition: Not Supported 00:30:26.400 Memory Page Size Minimum: 4096 bytes 00:30:26.400 Memory Page Size Maximum: 65536 bytes 00:30:26.400 Persistent Memory Region: Not Supported 00:30:26.400 Optional Asynchronous Events Supported 00:30:26.400 Namespace Attribute Notices: Supported 00:30:26.400 Firmware Activation Notices: Not Supported 00:30:26.400 ANA Change Notices: Not Supported 00:30:26.400 PLE Aggregate Log Change Notices: Not Supported 00:30:26.400 LBA Status Info Alert Notices: Not Supported 00:30:26.400 EGE Aggregate Log Change Notices: Not Supported 00:30:26.400 Normal NVM Subsystem Shutdown event: Not Supported 00:30:26.400 Zone Descriptor Change Notices: Not Supported 00:30:26.400 Discovery Log Change Notices: Not Supported 00:30:26.400 Controller Attributes 00:30:26.400 128-bit Host Identifier: Not Supported 00:30:26.400 Non-Operational Permissive Mode: Not Supported 00:30:26.400 NVM Sets: Not Supported 00:30:26.400 Read Recovery Levels: Not Supported 00:30:26.400 Endurance Groups: Not Supported 00:30:26.400 Predictable Latency Mode: Not Supported 00:30:26.400 Traffic Based Keep ALive: Not Supported 00:30:26.400 Namespace Granularity: Not Supported 00:30:26.400 SQ Associations: Not Supported 00:30:26.400 UUID List: Not Supported 00:30:26.400 Multi-Domain Subsystem: Not Supported 00:30:26.400 
Fixed Capacity Management: Not Supported 00:30:26.400 Variable Capacity Management: Not Supported 00:30:26.400 Delete Endurance Group: Not Supported 00:30:26.400 Delete NVM Set: Not Supported 00:30:26.400 Extended LBA Formats Supported: Supported 00:30:26.400 Flexible Data Placement Supported: Not Supported 00:30:26.400 00:30:26.400 Controller Memory Buffer Support 00:30:26.400 ================================ 00:30:26.400 Supported: No 00:30:26.400 00:30:26.400 Persistent Memory Region Support 00:30:26.400 ================================ 00:30:26.400 Supported: No 00:30:26.400 00:30:26.400 Admin Command Set Attributes 00:30:26.400 ============================ 00:30:26.400 Security Send/Receive: Not Supported 00:30:26.400 Format NVM: Supported 00:30:26.400 Firmware Activate/Download: Not Supported 00:30:26.400 Namespace Management: Supported 00:30:26.400 Device Self-Test: Not Supported 00:30:26.400 Directives: Supported 00:30:26.400 NVMe-MI: Not Supported 00:30:26.400 Virtualization Management: Not Supported 00:30:26.400 Doorbell Buffer Config: Supported 00:30:26.400 Get LBA Status Capability: Not Supported 00:30:26.400 Command & Feature Lockdown Capability: Not Supported 00:30:26.400 Abort Command Limit: 4 00:30:26.400 Async Event Request Limit: 4 00:30:26.400 Number of Firmware Slots: N/A 00:30:26.400 Firmware Slot 1 Read-Only: N/A 00:30:26.400 Firmware Activation Without Reset: N/A 00:30:26.400 Multiple Update Detection Support: N/A 00:30:26.400 Firmware Update Granularity: No Information Provided 00:30:26.400 Per-Namespace SMART Log: Yes 00:30:26.400 Asymmetric Namespace Access Log Page: Not Supported 00:30:26.400 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:26.400 Command Effects Log Page: Supported 00:30:26.400 Get Log Page Extended Data: Supported 00:30:26.400 Telemetry Log Pages: Not Supported 00:30:26.400 Persistent Event Log Pages: Not Supported 00:30:26.400 Supported Log Pages Log Page: May Support 00:30:26.400 Commands Supported & Effects Log Page: Not Supported 00:30:26.400 Feature Identifiers & Effects Log Page:May Support 00:30:26.400 NVMe-MI Commands & Effects Log Page: May Support 00:30:26.401 Data Area 4 for Telemetry Log: Not Supported 00:30:26.401 Error Log Page Entries Supported: 1 00:30:26.401 Keep Alive: Not Supported 00:30:26.401 00:30:26.401 NVM Command Set Attributes 00:30:26.401 ========================== 00:30:26.401 Submission Queue Entry Size 00:30:26.401 Max: 64 00:30:26.401 Min: 64 00:30:26.401 Completion Queue Entry Size 00:30:26.401 Max: 16 00:30:26.401 Min: 16 00:30:26.401 Number of Namespaces: 256 00:30:26.401 Compare Command: Supported 00:30:26.401 Write Uncorrectable Command: Not Supported 00:30:26.401 Dataset Management Command: Supported 00:30:26.401 Write Zeroes Command: Supported 00:30:26.401 Set Features Save Field: Supported 00:30:26.401 Reservations: Not Supported 00:30:26.401 Timestamp: Supported 00:30:26.401 Copy: Supported 00:30:26.401 Volatile Write Cache: Present 00:30:26.401 Atomic Write Unit (Normal): 1 00:30:26.401 Atomic Write Unit (PFail): 1 00:30:26.401 Atomic Compare & Write Unit: 1 00:30:26.401 Fused Compare & Write: Not Supported 00:30:26.401 Scatter-Gather List 00:30:26.401 SGL Command Set: Supported 00:30:26.401 SGL Keyed: Not Supported 00:30:26.401 SGL Bit Bucket Descriptor: Not Supported 00:30:26.401 SGL Metadata Pointer: Not Supported 00:30:26.401 Oversized SGL: Not Supported 00:30:26.401 SGL Metadata Address: Not Supported 00:30:26.401 SGL Offset: Not Supported 00:30:26.401 Transport SGL Data Block: Not Supported 
00:30:26.401 Replay Protected Memory Block: Not Supported 00:30:26.401 00:30:26.401 Firmware Slot Information 00:30:26.401 ========================= 00:30:26.401 Active slot: 1 00:30:26.401 Slot 1 Firmware Revision: 1.0 00:30:26.401 00:30:26.401 00:30:26.401 Commands Supported and Effects 00:30:26.401 ============================== 00:30:26.401 Admin Commands 00:30:26.401 -------------- 00:30:26.401 Delete I/O Submission Queue (00h): Supported 00:30:26.401 Create I/O Submission Queue (01h): Supported 00:30:26.401 Get Log Page (02h): Supported 00:30:26.401 Delete I/O Completion Queue (04h): Supported 00:30:26.401 Create I/O Completion Queue (05h): Supported 00:30:26.401 Identify (06h): Supported 00:30:26.401 Abort (08h): Supported 00:30:26.401 Set Features (09h): Supported 00:30:26.401 Get Features (0Ah): Supported 00:30:26.401 Asynchronous Event Request (0Ch): Supported 00:30:26.401 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:26.401 Directive Send (19h): Supported 00:30:26.401 Directive Receive (1Ah): Supported 00:30:26.401 Virtualization Management (1Ch): Supported 00:30:26.401 Doorbell Buffer Config (7Ch): Supported 00:30:26.401 Format NVM (80h): Supported LBA-Change 00:30:26.401 I/O Commands 00:30:26.401 ------------ 00:30:26.401 Flush (00h): Supported LBA-Change 00:30:26.401 Write (01h): Supported LBA-Change 00:30:26.401 Read (02h): Supported 00:30:26.401 Compare (05h): Supported 00:30:26.401 Write Zeroes (08h): Supported LBA-Change 00:30:26.401 Dataset Management (09h): Supported LBA-Change 00:30:26.401 Unknown (0Ch): Supported 00:30:26.401 Unknown (12h): Supported 00:30:26.401 Copy (19h): Supported LBA-Change 00:30:26.401 Unknown (1Dh): Supported LBA-Change 00:30:26.401 00:30:26.401 Error Log 00:30:26.401 ========= 00:30:26.401 00:30:26.401 Arbitration 00:30:26.401 =========== 00:30:26.401 Arbitration Burst: no limit 00:30:26.401 00:30:26.401 Power Management 00:30:26.401 ================ 00:30:26.401 Number of Power States: 1 00:30:26.401 Current Power State: Power State #0 00:30:26.401 Power State #0: 00:30:26.401 Max Power: 25.00 W 00:30:26.401 Non-Operational State: Operational 00:30:26.401 Entry Latency: 16 microseconds 00:30:26.401 Exit Latency: 4 microseconds 00:30:26.401 Relative Read Throughput: 0 00:30:26.401 Relative Read Latency: 0 00:30:26.401 Relative Write Throughput: 0 00:30:26.401 Relative Write Latency: 0 00:30:26.401 Idle Power: Not Reported 00:30:26.401 Active Power: Not Reported 00:30:26.401 Non-Operational Permissive Mode: Not Supported 00:30:26.401 00:30:26.401 Health Information 00:30:26.401 ================== 00:30:26.401 Critical Warnings: 00:30:26.401 Available Spare Space: OK 00:30:26.401 Temperature: OK 00:30:26.401 Device Reliability: OK 00:30:26.401 Read Only: No 00:30:26.401 Volatile Memory Backup: OK 00:30:26.401 Current Temperature: 323 Kelvin (50 Celsius) 00:30:26.401 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:26.401 Available Spare: 0% 00:30:26.401 Available Spare Threshold: 0% 00:30:26.401 Life Percentage Used: 0% 00:30:26.401 Data Units Read: 8681 00:30:26.401 Data Units Written: 4237 00:30:26.401 Host Read Commands: 308350 00:30:26.401 Host Write Commands: 169387 00:30:26.401 Controller Busy Time: 0 minutes 00:30:26.401 Power Cycles: 0 00:30:26.401 Power On Hours: 0 hours 00:30:26.401 Unsafe Shutdowns: 0 00:30:26.401 Unrecoverable Media Errors: 0 00:30:26.401 Lifetime Error Log Entries: 0 00:30:26.401 Warning Temperature Time: 0 minutes 00:30:26.401 Critical Temperature Time: 0 minutes 00:30:26.401 00:30:26.401 
Number of Queues 00:30:26.401 ================ 00:30:26.401 Number of I/O Submission Queues: 64 00:30:26.401 Number of I/O Completion Queues: 64 00:30:26.401 00:30:26.401 ZNS Specific Controller Data 00:30:26.401 ============================ 00:30:26.401 Zone Append Size Limit: 0 00:30:26.401 00:30:26.401 00:30:26.401 Active Namespaces 00:30:26.401 ================= 00:30:26.401 Namespace ID:1 00:30:26.401 Error Recovery Timeout: Unlimited 00:30:26.401 Command Set Identifier: NVM (00h) 00:30:26.401 Deallocate: Supported 00:30:26.401 Deallocated/Unwritten Error: Supported 00:30:26.401 Deallocated Read Value: All 0x00 00:30:26.401 Deallocate in Write Zeroes: Not Supported 00:30:26.401 Deallocated Guard Field: 0xFFFF 00:30:26.401 Flush: Supported 00:30:26.401 Reservation: Not Supported 00:30:26.401 Namespace Sharing Capabilities: Private 00:30:26.401 Size (in LBAs): 1310720 (5GiB) 00:30:26.401 Capacity (in LBAs): 1310720 (5GiB) 00:30:26.401 Utilization (in LBAs): 1310720 (5GiB) 00:30:26.401 Thin Provisioning: Not Supported 00:30:26.401 Per-NS Atomic Units: No 00:30:26.401 Maximum Single Source Range Length: 128 00:30:26.401 Maximum Copy Length: 128 00:30:26.401 Maximum Source Range Count: 128 00:30:26.401 NGUID/EUI64 Never Reused: No 00:30:26.401 Namespace Write Protected: No 00:30:26.401 Number of LBA Formats: 8 00:30:26.401 Current LBA Format: LBA Format #04 00:30:26.401 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:26.401 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:26.401 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:26.401 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:26.401 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:26.401 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:26.401 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:26.401 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:26.401 00:30:26.401 13:15:45 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:26.401 13:15:45 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:30:26.661 ===================================================== 00:30:26.661 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:26.661 ===================================================== 00:30:26.661 Controller Capabilities/Features 00:30:26.661 ================================ 00:30:26.661 Vendor ID: 1b36 00:30:26.661 Subsystem Vendor ID: 1af4 00:30:26.661 Serial Number: 12340 00:30:26.661 Model Number: QEMU NVMe Ctrl 00:30:26.661 Firmware Version: 8.0.0 00:30:26.661 Recommended Arb Burst: 6 00:30:26.661 IEEE OUI Identifier: 00 54 52 00:30:26.661 Multi-path I/O 00:30:26.661 May have multiple subsystem ports: No 00:30:26.661 May have multiple controllers: No 00:30:26.661 Associated with SR-IOV VF: No 00:30:26.661 Max Data Transfer Size: 524288 00:30:26.661 Max Number of Namespaces: 256 00:30:26.661 Max Number of I/O Queues: 64 00:30:26.661 NVMe Specification Version (VS): 1.4 00:30:26.661 NVMe Specification Version (Identify): 1.4 00:30:26.661 Maximum Queue Entries: 2048 00:30:26.661 Contiguous Queues Required: Yes 00:30:26.661 Arbitration Mechanisms Supported 00:30:26.661 Weighted Round Robin: Not Supported 00:30:26.661 Vendor Specific: Not Supported 00:30:26.661 Reset Timeout: 7500 ms 00:30:26.661 Doorbell Stride: 4 bytes 00:30:26.661 NVM Subsystem Reset: Not Supported 00:30:26.661 Command Sets Supported 00:30:26.661 NVM Command Set: Supported 00:30:26.661 Boot Partition: Not Supported 00:30:26.661 Memory Page Size 
Minimum: 4096 bytes 00:30:26.661 Memory Page Size Maximum: 65536 bytes 00:30:26.661 Persistent Memory Region: Not Supported 00:30:26.661 Optional Asynchronous Events Supported 00:30:26.661 Namespace Attribute Notices: Supported 00:30:26.661 Firmware Activation Notices: Not Supported 00:30:26.661 ANA Change Notices: Not Supported 00:30:26.661 PLE Aggregate Log Change Notices: Not Supported 00:30:26.661 LBA Status Info Alert Notices: Not Supported 00:30:26.661 EGE Aggregate Log Change Notices: Not Supported 00:30:26.661 Normal NVM Subsystem Shutdown event: Not Supported 00:30:26.661 Zone Descriptor Change Notices: Not Supported 00:30:26.661 Discovery Log Change Notices: Not Supported 00:30:26.661 Controller Attributes 00:30:26.661 128-bit Host Identifier: Not Supported 00:30:26.661 Non-Operational Permissive Mode: Not Supported 00:30:26.661 NVM Sets: Not Supported 00:30:26.661 Read Recovery Levels: Not Supported 00:30:26.661 Endurance Groups: Not Supported 00:30:26.661 Predictable Latency Mode: Not Supported 00:30:26.661 Traffic Based Keep ALive: Not Supported 00:30:26.661 Namespace Granularity: Not Supported 00:30:26.661 SQ Associations: Not Supported 00:30:26.661 UUID List: Not Supported 00:30:26.661 Multi-Domain Subsystem: Not Supported 00:30:26.661 Fixed Capacity Management: Not Supported 00:30:26.661 Variable Capacity Management: Not Supported 00:30:26.661 Delete Endurance Group: Not Supported 00:30:26.661 Delete NVM Set: Not Supported 00:30:26.661 Extended LBA Formats Supported: Supported 00:30:26.661 Flexible Data Placement Supported: Not Supported 00:30:26.661 00:30:26.661 Controller Memory Buffer Support 00:30:26.661 ================================ 00:30:26.661 Supported: No 00:30:26.661 00:30:26.661 Persistent Memory Region Support 00:30:26.661 ================================ 00:30:26.661 Supported: No 00:30:26.661 00:30:26.661 Admin Command Set Attributes 00:30:26.661 ============================ 00:30:26.661 Security Send/Receive: Not Supported 00:30:26.661 Format NVM: Supported 00:30:26.661 Firmware Activate/Download: Not Supported 00:30:26.661 Namespace Management: Supported 00:30:26.661 Device Self-Test: Not Supported 00:30:26.661 Directives: Supported 00:30:26.661 NVMe-MI: Not Supported 00:30:26.661 Virtualization Management: Not Supported 00:30:26.661 Doorbell Buffer Config: Supported 00:30:26.661 Get LBA Status Capability: Not Supported 00:30:26.661 Command & Feature Lockdown Capability: Not Supported 00:30:26.661 Abort Command Limit: 4 00:30:26.661 Async Event Request Limit: 4 00:30:26.661 Number of Firmware Slots: N/A 00:30:26.661 Firmware Slot 1 Read-Only: N/A 00:30:26.661 Firmware Activation Without Reset: N/A 00:30:26.661 Multiple Update Detection Support: N/A 00:30:26.661 Firmware Update Granularity: No Information Provided 00:30:26.661 Per-Namespace SMART Log: Yes 00:30:26.661 Asymmetric Namespace Access Log Page: Not Supported 00:30:26.661 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:26.661 Command Effects Log Page: Supported 00:30:26.661 Get Log Page Extended Data: Supported 00:30:26.661 Telemetry Log Pages: Not Supported 00:30:26.661 Persistent Event Log Pages: Not Supported 00:30:26.661 Supported Log Pages Log Page: May Support 00:30:26.661 Commands Supported & Effects Log Page: Not Supported 00:30:26.661 Feature Identifiers & Effects Log Page:May Support 00:30:26.661 NVMe-MI Commands & Effects Log Page: May Support 00:30:26.661 Data Area 4 for Telemetry Log: Not Supported 00:30:26.661 Error Log Page Entries Supported: 1 00:30:26.661 Keep Alive: Not 
Supported 00:30:26.661 00:30:26.661 NVM Command Set Attributes 00:30:26.661 ========================== 00:30:26.661 Submission Queue Entry Size 00:30:26.661 Max: 64 00:30:26.661 Min: 64 00:30:26.661 Completion Queue Entry Size 00:30:26.661 Max: 16 00:30:26.661 Min: 16 00:30:26.661 Number of Namespaces: 256 00:30:26.661 Compare Command: Supported 00:30:26.661 Write Uncorrectable Command: Not Supported 00:30:26.661 Dataset Management Command: Supported 00:30:26.661 Write Zeroes Command: Supported 00:30:26.661 Set Features Save Field: Supported 00:30:26.661 Reservations: Not Supported 00:30:26.661 Timestamp: Supported 00:30:26.661 Copy: Supported 00:30:26.661 Volatile Write Cache: Present 00:30:26.661 Atomic Write Unit (Normal): 1 00:30:26.661 Atomic Write Unit (PFail): 1 00:30:26.661 Atomic Compare & Write Unit: 1 00:30:26.661 Fused Compare & Write: Not Supported 00:30:26.661 Scatter-Gather List 00:30:26.661 SGL Command Set: Supported 00:30:26.661 SGL Keyed: Not Supported 00:30:26.661 SGL Bit Bucket Descriptor: Not Supported 00:30:26.661 SGL Metadata Pointer: Not Supported 00:30:26.661 Oversized SGL: Not Supported 00:30:26.661 SGL Metadata Address: Not Supported 00:30:26.661 SGL Offset: Not Supported 00:30:26.661 Transport SGL Data Block: Not Supported 00:30:26.661 Replay Protected Memory Block: Not Supported 00:30:26.661 00:30:26.661 Firmware Slot Information 00:30:26.661 ========================= 00:30:26.661 Active slot: 1 00:30:26.661 Slot 1 Firmware Revision: 1.0 00:30:26.661 00:30:26.661 00:30:26.661 Commands Supported and Effects 00:30:26.661 ============================== 00:30:26.661 Admin Commands 00:30:26.661 -------------- 00:30:26.661 Delete I/O Submission Queue (00h): Supported 00:30:26.661 Create I/O Submission Queue (01h): Supported 00:30:26.661 Get Log Page (02h): Supported 00:30:26.661 Delete I/O Completion Queue (04h): Supported 00:30:26.661 Create I/O Completion Queue (05h): Supported 00:30:26.661 Identify (06h): Supported 00:30:26.661 Abort (08h): Supported 00:30:26.661 Set Features (09h): Supported 00:30:26.661 Get Features (0Ah): Supported 00:30:26.661 Asynchronous Event Request (0Ch): Supported 00:30:26.661 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:26.661 Directive Send (19h): Supported 00:30:26.661 Directive Receive (1Ah): Supported 00:30:26.661 Virtualization Management (1Ch): Supported 00:30:26.661 Doorbell Buffer Config (7Ch): Supported 00:30:26.661 Format NVM (80h): Supported LBA-Change 00:30:26.661 I/O Commands 00:30:26.661 ------------ 00:30:26.661 Flush (00h): Supported LBA-Change 00:30:26.661 Write (01h): Supported LBA-Change 00:30:26.661 Read (02h): Supported 00:30:26.661 Compare (05h): Supported 00:30:26.661 Write Zeroes (08h): Supported LBA-Change 00:30:26.661 Dataset Management (09h): Supported LBA-Change 00:30:26.661 Unknown (0Ch): Supported 00:30:26.662 Unknown (12h): Supported 00:30:26.662 Copy (19h): Supported LBA-Change 00:30:26.662 Unknown (1Dh): Supported LBA-Change 00:30:26.662 00:30:26.662 Error Log 00:30:26.662 ========= 00:30:26.662 00:30:26.662 Arbitration 00:30:26.662 =========== 00:30:26.662 Arbitration Burst: no limit 00:30:26.662 00:30:26.662 Power Management 00:30:26.662 ================ 00:30:26.662 Number of Power States: 1 00:30:26.662 Current Power State: Power State #0 00:30:26.662 Power State #0: 00:30:26.662 Max Power: 25.00 W 00:30:26.662 Non-Operational State: Operational 00:30:26.662 Entry Latency: 16 microseconds 00:30:26.662 Exit Latency: 4 microseconds 00:30:26.662 Relative Read Throughput: 0 
00:30:26.662 Relative Read Latency: 0 00:30:26.662 Relative Write Throughput: 0 00:30:26.662 Relative Write Latency: 0 00:30:26.662 Idle Power: Not Reported 00:30:26.662 Active Power: Not Reported 00:30:26.662 Non-Operational Permissive Mode: Not Supported 00:30:26.662 00:30:26.662 Health Information 00:30:26.662 ================== 00:30:26.662 Critical Warnings: 00:30:26.662 Available Spare Space: OK 00:30:26.662 Temperature: OK 00:30:26.662 Device Reliability: OK 00:30:26.662 Read Only: No 00:30:26.662 Volatile Memory Backup: OK 00:30:26.662 Current Temperature: 323 Kelvin (50 Celsius) 00:30:26.662 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:26.662 Available Spare: 0% 00:30:26.662 Available Spare Threshold: 0% 00:30:26.662 Life Percentage Used: 0% 00:30:26.662 Data Units Read: 8681 00:30:26.662 Data Units Written: 4237 00:30:26.662 Host Read Commands: 308350 00:30:26.662 Host Write Commands: 169387 00:30:26.662 Controller Busy Time: 0 minutes 00:30:26.662 Power Cycles: 0 00:30:26.662 Power On Hours: 0 hours 00:30:26.662 Unsafe Shutdowns: 0 00:30:26.662 Unrecoverable Media Errors: 0 00:30:26.662 Lifetime Error Log Entries: 0 00:30:26.662 Warning Temperature Time: 0 minutes 00:30:26.662 Critical Temperature Time: 0 minutes 00:30:26.662 00:30:26.662 Number of Queues 00:30:26.662 ================ 00:30:26.662 Number of I/O Submission Queues: 64 00:30:26.662 Number of I/O Completion Queues: 64 00:30:26.662 00:30:26.662 ZNS Specific Controller Data 00:30:26.662 ============================ 00:30:26.662 Zone Append Size Limit: 0 00:30:26.662 00:30:26.662 00:30:26.662 Active Namespaces 00:30:26.662 ================= 00:30:26.662 Namespace ID:1 00:30:26.662 Error Recovery Timeout: Unlimited 00:30:26.662 Command Set Identifier: NVM (00h) 00:30:26.662 Deallocate: Supported 00:30:26.662 Deallocated/Unwritten Error: Supported 00:30:26.662 Deallocated Read Value: All 0x00 00:30:26.662 Deallocate in Write Zeroes: Not Supported 00:30:26.662 Deallocated Guard Field: 0xFFFF 00:30:26.662 Flush: Supported 00:30:26.662 Reservation: Not Supported 00:30:26.662 Namespace Sharing Capabilities: Private 00:30:26.662 Size (in LBAs): 1310720 (5GiB) 00:30:26.662 Capacity (in LBAs): 1310720 (5GiB) 00:30:26.662 Utilization (in LBAs): 1310720 (5GiB) 00:30:26.662 Thin Provisioning: Not Supported 00:30:26.662 Per-NS Atomic Units: No 00:30:26.662 Maximum Single Source Range Length: 128 00:30:26.662 Maximum Copy Length: 128 00:30:26.662 Maximum Source Range Count: 128 00:30:26.662 NGUID/EUI64 Never Reused: No 00:30:26.662 Namespace Write Protected: No 00:30:26.662 Number of LBA Formats: 8 00:30:26.662 Current LBA Format: LBA Format #04 00:30:26.662 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:26.662 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:26.662 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:26.662 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:26.662 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:26.662 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:26.662 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:26.662 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:26.662 00:30:26.919 ************************************ 00:30:26.919 END TEST nvme_identify 00:30:26.919 ************************************ 00:30:26.919 00:30:26.919 real 0m0.726s 00:30:26.919 user 0m0.344s 00:30:26.919 sys 0m0.279s 00:30:26.919 13:15:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.919 13:15:45 -- common/autotest_common.sh@10 -- # set +x 00:30:26.919 
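The identify pass that just finished is driven by two small pieces visible in the trace: get_nvme_bdfs() derives the PCI addresses from scripts/gen_nvme.sh, and spdk_nvme_identify is then pointed at each address. A condensed sketch using only the commands that appear above (the single 0000:00:06.0 address is this VM's QEMU NVMe device):

    # enumerate NVMe PCI addresses the same way get_nvme_bdfs() does
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    # dump controller data for each one, sharing shm id 0 with the stub process
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
    done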
13:15:45 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:26.919 13:15:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:26.919 13:15:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:26.919 13:15:45 -- common/autotest_common.sh@10 -- # set +x 00:30:26.919 ************************************ 00:30:26.919 START TEST nvme_perf 00:30:26.919 ************************************ 00:30:26.919 13:15:45 -- common/autotest_common.sh@1104 -- # nvme_perf 00:30:26.919 13:15:45 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:28.294 Initializing NVMe Controllers 00:30:28.294 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:28.294 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:28.294 Initialization complete. Launching workers. 00:30:28.294 ======================================================== 00:30:28.294 Latency(us) 00:30:28.294 Device Information : IOPS MiB/s Average min max 00:30:28.294 PCIE (0000:00:06.0) NSID 1 from core 0: 52480.00 615.00 2437.16 1291.54 7287.23 00:30:28.294 ======================================================== 00:30:28.294 Total : 52480.00 615.00 2437.16 1291.54 7287.23 00:30:28.294 00:30:28.294 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:28.294 ================================================================================= 00:30:28.294 1.00000% : 1496.902us 00:30:28.294 10.00000% : 1712.873us 00:30:28.294 25.00000% : 1980.975us 00:30:28.294 50.00000% : 2427.811us 00:30:28.294 75.00000% : 2874.647us 00:30:28.294 90.00000% : 3142.749us 00:30:28.294 95.00000% : 3291.695us 00:30:28.294 98.00000% : 3530.007us 00:30:28.294 99.00000% : 3649.164us 00:30:28.294 99.50000% : 3798.109us 00:30:28.294 99.90000% : 5570.560us 00:30:28.295 99.99000% : 7060.015us 00:30:28.295 99.99900% : 7298.327us 00:30:28.295 99.99990% : 7298.327us 00:30:28.295 99.99999% : 7298.327us 00:30:28.295 00:30:28.295 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:28.295 ============================================================================== 00:30:28.295 Range in us Cumulative IO count 00:30:28.295 1288.378 - 1295.825: 0.0019% ( 1) 00:30:28.295 1295.825 - 1303.273: 0.0057% ( 2) 00:30:28.295 1303.273 - 1310.720: 0.0076% ( 1) 00:30:28.295 1310.720 - 1318.167: 0.0095% ( 1) 00:30:28.295 1318.167 - 1325.615: 0.0114% ( 1) 00:30:28.295 1325.615 - 1333.062: 0.0133% ( 1) 00:30:28.295 1333.062 - 1340.509: 0.0171% ( 2) 00:30:28.295 1340.509 - 1347.956: 0.0191% ( 1) 00:30:28.295 1347.956 - 1355.404: 0.0286% ( 5) 00:30:28.295 1355.404 - 1362.851: 0.0343% ( 3) 00:30:28.295 1362.851 - 1370.298: 0.0419% ( 4) 00:30:28.295 1370.298 - 1377.745: 0.0534% ( 6) 00:30:28.295 1377.745 - 1385.193: 0.0667% ( 7) 00:30:28.295 1385.193 - 1392.640: 0.0781% ( 6) 00:30:28.295 1392.640 - 1400.087: 0.0972% ( 10) 00:30:28.295 1400.087 - 1407.535: 0.1181% ( 11) 00:30:28.295 1407.535 - 1414.982: 0.1353% ( 9) 00:30:28.295 1414.982 - 1422.429: 0.1524% ( 9) 00:30:28.295 1422.429 - 1429.876: 0.1886% ( 19) 00:30:28.295 1429.876 - 1437.324: 0.2287% ( 21) 00:30:28.295 1437.324 - 1444.771: 0.2820% ( 28) 00:30:28.295 1444.771 - 1452.218: 0.3582% ( 40) 00:30:28.295 1452.218 - 1459.665: 0.4421% ( 44) 00:30:28.295 1459.665 - 1467.113: 0.5202% ( 41) 00:30:28.295 1467.113 - 1474.560: 0.6269% ( 56) 00:30:28.295 1474.560 - 1482.007: 0.7431% ( 61) 00:30:28.295 1482.007 - 1489.455: 0.8765% ( 70) 00:30:28.295 1489.455 - 1496.902: 1.0366% ( 84) 00:30:28.295 1496.902 - 1504.349: 
1.1986% ( 85) 00:30:28.295 1504.349 - 1511.796: 1.3777% ( 94) 00:30:28.295 1511.796 - 1519.244: 1.5873% ( 110) 00:30:28.295 1519.244 - 1526.691: 1.7664% ( 94) 00:30:28.295 1526.691 - 1534.138: 1.9874% ( 116) 00:30:28.295 1534.138 - 1541.585: 2.2123% ( 118) 00:30:28.295 1541.585 - 1549.033: 2.4638% ( 132) 00:30:28.295 1549.033 - 1556.480: 2.6925% ( 120) 00:30:28.295 1556.480 - 1563.927: 2.9707% ( 146) 00:30:28.295 1563.927 - 1571.375: 3.2355% ( 139) 00:30:28.295 1571.375 - 1578.822: 3.5423% ( 161) 00:30:28.295 1578.822 - 1586.269: 3.8586% ( 166) 00:30:28.295 1586.269 - 1593.716: 4.1711% ( 164) 00:30:28.295 1593.716 - 1601.164: 4.4912% ( 168) 00:30:28.295 1601.164 - 1608.611: 4.8114% ( 168) 00:30:28.295 1608.611 - 1616.058: 5.1582% ( 182) 00:30:28.295 1616.058 - 1623.505: 5.5259% ( 193) 00:30:28.295 1623.505 - 1630.953: 5.8479% ( 169) 00:30:28.295 1630.953 - 1638.400: 6.2100% ( 190) 00:30:28.295 1638.400 - 1645.847: 6.5949% ( 202) 00:30:28.295 1645.847 - 1653.295: 6.9817% ( 203) 00:30:28.295 1653.295 - 1660.742: 7.3723% ( 205) 00:30:28.295 1660.742 - 1668.189: 7.7534% ( 200) 00:30:28.295 1668.189 - 1675.636: 8.1479% ( 207) 00:30:28.295 1675.636 - 1683.084: 8.5575% ( 215) 00:30:28.295 1683.084 - 1690.531: 8.9463% ( 204) 00:30:28.295 1690.531 - 1697.978: 9.3731% ( 224) 00:30:28.295 1697.978 - 1705.425: 9.7732% ( 210) 00:30:28.295 1705.425 - 1712.873: 10.1658% ( 206) 00:30:28.295 1712.873 - 1720.320: 10.5831% ( 219) 00:30:28.295 1720.320 - 1727.767: 10.9699% ( 203) 00:30:28.295 1727.767 - 1735.215: 11.4024% ( 227) 00:30:28.295 1735.215 - 1742.662: 11.7912% ( 204) 00:30:28.295 1742.662 - 1750.109: 12.2218% ( 226) 00:30:28.295 1750.109 - 1757.556: 12.6296% ( 214) 00:30:28.295 1757.556 - 1765.004: 13.0564% ( 224) 00:30:28.295 1765.004 - 1772.451: 13.4661% ( 215) 00:30:28.295 1772.451 - 1779.898: 13.8853% ( 220) 00:30:28.295 1779.898 - 1787.345: 14.2950% ( 215) 00:30:28.295 1787.345 - 1794.793: 14.7046% ( 215) 00:30:28.295 1794.793 - 1802.240: 15.1353% ( 226) 00:30:28.295 1802.240 - 1809.687: 15.5469% ( 216) 00:30:28.295 1809.687 - 1817.135: 15.9756% ( 225) 00:30:28.295 1817.135 - 1824.582: 16.3891% ( 217) 00:30:28.295 1824.582 - 1832.029: 16.8216% ( 227) 00:30:28.295 1832.029 - 1839.476: 17.2294% ( 214) 00:30:28.295 1839.476 - 1846.924: 17.6696% ( 231) 00:30:28.295 1846.924 - 1854.371: 18.0736% ( 212) 00:30:28.295 1854.371 - 1861.818: 18.4794% ( 213) 00:30:28.295 1861.818 - 1869.265: 18.8910% ( 216) 00:30:28.295 1869.265 - 1876.713: 19.3026% ( 216) 00:30:28.295 1876.713 - 1884.160: 19.7447% ( 232) 00:30:28.295 1884.160 - 1891.607: 20.1410% ( 208) 00:30:28.295 1891.607 - 1899.055: 20.5393% ( 209) 00:30:28.295 1899.055 - 1906.502: 20.9870% ( 235) 00:30:28.295 1906.502 - 1921.396: 21.8293% ( 442) 00:30:28.295 1921.396 - 1936.291: 22.6315% ( 421) 00:30:28.295 1936.291 - 1951.185: 23.4966% ( 454) 00:30:28.295 1951.185 - 1966.080: 24.3274% ( 436) 00:30:28.295 1966.080 - 1980.975: 25.1524% ( 433) 00:30:28.295 1980.975 - 1995.869: 25.9566% ( 422) 00:30:28.295 1995.869 - 2010.764: 26.7988% ( 442) 00:30:28.295 2010.764 - 2025.658: 27.6410% ( 442) 00:30:28.295 2025.658 - 2040.553: 28.4832% ( 442) 00:30:28.295 2040.553 - 2055.447: 29.3121% ( 435) 00:30:28.295 2055.447 - 2070.342: 30.1448% ( 437) 00:30:28.295 2070.342 - 2085.236: 30.9889% ( 443) 00:30:28.295 2085.236 - 2100.131: 31.8274% ( 440) 00:30:28.295 2100.131 - 2115.025: 32.6601% ( 437) 00:30:28.295 2115.025 - 2129.920: 33.4928% ( 437) 00:30:28.295 2129.920 - 2144.815: 34.3445% ( 447) 00:30:28.295 2144.815 - 2159.709: 35.1715% ( 434) 00:30:28.295 
2159.709 - 2174.604: 36.0061% ( 438) 00:30:28.295 2174.604 - 2189.498: 36.8236% ( 429) 00:30:28.295 2189.498 - 2204.393: 37.6658% ( 442) 00:30:28.295 2204.393 - 2219.287: 38.5080% ( 442) 00:30:28.295 2219.287 - 2234.182: 39.3293% ( 431) 00:30:28.295 2234.182 - 2249.076: 40.1486% ( 430) 00:30:28.295 2249.076 - 2263.971: 40.9756% ( 434) 00:30:28.295 2263.971 - 2278.865: 41.7912% ( 428) 00:30:28.295 2278.865 - 2293.760: 42.6315% ( 441) 00:30:28.295 2293.760 - 2308.655: 43.4851% ( 448) 00:30:28.295 2308.655 - 2323.549: 44.3331% ( 445) 00:30:28.295 2323.549 - 2338.444: 45.1562% ( 432) 00:30:28.295 2338.444 - 2353.338: 46.0042% ( 445) 00:30:28.295 2353.338 - 2368.233: 46.8731% ( 456) 00:30:28.295 2368.233 - 2383.127: 47.7306% ( 450) 00:30:28.295 2383.127 - 2398.022: 48.5595% ( 435) 00:30:28.295 2398.022 - 2412.916: 49.3902% ( 436) 00:30:28.295 2412.916 - 2427.811: 50.2458% ( 449) 00:30:28.295 2427.811 - 2442.705: 51.0747% ( 435) 00:30:28.295 2442.705 - 2457.600: 51.9284% ( 448) 00:30:28.295 2457.600 - 2472.495: 52.7706% ( 442) 00:30:28.295 2472.495 - 2487.389: 53.6261% ( 449) 00:30:28.295 2487.389 - 2502.284: 54.4607% ( 438) 00:30:28.295 2502.284 - 2517.178: 55.2725% ( 426) 00:30:28.295 2517.178 - 2532.073: 56.0957% ( 432) 00:30:28.295 2532.073 - 2546.967: 56.9455% ( 446) 00:30:28.295 2546.967 - 2561.862: 57.7744% ( 435) 00:30:28.295 2561.862 - 2576.756: 58.6147% ( 441) 00:30:28.295 2576.756 - 2591.651: 59.4531% ( 440) 00:30:28.295 2591.651 - 2606.545: 60.2763% ( 432) 00:30:28.295 2606.545 - 2621.440: 61.0899% ( 427) 00:30:28.295 2621.440 - 2636.335: 61.9646% ( 459) 00:30:28.295 2636.335 - 2651.229: 62.7915% ( 434) 00:30:28.295 2651.229 - 2666.124: 63.6471% ( 449) 00:30:28.295 2666.124 - 2681.018: 64.5179% ( 457) 00:30:28.295 2681.018 - 2695.913: 65.3582% ( 441) 00:30:28.295 2695.913 - 2710.807: 66.2271% ( 456) 00:30:28.295 2710.807 - 2725.702: 67.0770% ( 446) 00:30:28.295 2725.702 - 2740.596: 67.9021% ( 433) 00:30:28.295 2740.596 - 2755.491: 68.7614% ( 451) 00:30:28.295 2755.491 - 2770.385: 69.6208% ( 451) 00:30:28.295 2770.385 - 2785.280: 70.4878% ( 455) 00:30:28.295 2785.280 - 2800.175: 71.3567% ( 456) 00:30:28.295 2800.175 - 2815.069: 72.1932% ( 439) 00:30:28.295 2815.069 - 2829.964: 73.0526% ( 451) 00:30:28.295 2829.964 - 2844.858: 73.8967% ( 443) 00:30:28.295 2844.858 - 2859.753: 74.7294% ( 437) 00:30:28.295 2859.753 - 2874.647: 75.5545% ( 433) 00:30:28.295 2874.647 - 2889.542: 76.4710% ( 481) 00:30:28.295 2889.542 - 2904.436: 77.3171% ( 444) 00:30:28.295 2904.436 - 2919.331: 78.1631% ( 444) 00:30:28.295 2919.331 - 2934.225: 79.0149% ( 447) 00:30:28.295 2934.225 - 2949.120: 79.8819% ( 455) 00:30:28.295 2949.120 - 2964.015: 80.7260% ( 443) 00:30:28.295 2964.015 - 2978.909: 81.5511% ( 433) 00:30:28.295 2978.909 - 2993.804: 82.3838% ( 437) 00:30:28.295 2993.804 - 3008.698: 83.2146% ( 436) 00:30:28.295 3008.698 - 3023.593: 84.0434% ( 435) 00:30:28.295 3023.593 - 3038.487: 84.8742% ( 436) 00:30:28.295 3038.487 - 3053.382: 85.6841% ( 425) 00:30:28.295 3053.382 - 3068.276: 86.4939% ( 425) 00:30:28.295 3068.276 - 3083.171: 87.2504% ( 397) 00:30:28.295 3083.171 - 3098.065: 88.0450% ( 417) 00:30:28.295 3098.065 - 3112.960: 88.7881% ( 390) 00:30:28.295 3112.960 - 3127.855: 89.5274% ( 388) 00:30:28.295 3127.855 - 3142.749: 90.2401% ( 374) 00:30:28.295 3142.749 - 3157.644: 90.8975% ( 345) 00:30:28.295 3157.644 - 3172.538: 91.5320% ( 333) 00:30:28.295 3172.538 - 3187.433: 92.1303% ( 314) 00:30:28.295 3187.433 - 3202.327: 92.6696% ( 283) 00:30:28.295 3202.327 - 3217.222: 93.1612% ( 258) 00:30:28.295 
3217.222 - 3232.116: 93.6242% ( 243) 00:30:28.295 3232.116 - 3247.011: 94.0358% ( 216) 00:30:28.295 3247.011 - 3261.905: 94.4245% ( 204) 00:30:28.295 3261.905 - 3276.800: 94.7790% ( 186) 00:30:28.295 3276.800 - 3291.695: 95.0991% ( 168) 00:30:28.295 3291.695 - 3306.589: 95.3868% ( 151) 00:30:28.295 3306.589 - 3321.484: 95.6441% ( 135) 00:30:28.295 3321.484 - 3336.378: 95.8880% ( 128) 00:30:28.295 3336.378 - 3351.273: 96.0823% ( 102) 00:30:28.295 3351.273 - 3366.167: 96.2843% ( 106) 00:30:28.295 3366.167 - 3381.062: 96.4787% ( 102) 00:30:28.296 3381.062 - 3395.956: 96.6749% ( 103) 00:30:28.296 3395.956 - 3410.851: 96.8540% ( 94) 00:30:28.296 3410.851 - 3425.745: 97.0198% ( 87) 00:30:28.296 3425.745 - 3440.640: 97.1665% ( 77) 00:30:28.296 3440.640 - 3455.535: 97.3247% ( 83) 00:30:28.296 3455.535 - 3470.429: 97.4733% ( 78) 00:30:28.296 3470.429 - 3485.324: 97.6239% ( 79) 00:30:28.296 3485.324 - 3500.218: 97.7763% ( 80) 00:30:28.296 3500.218 - 3515.113: 97.9325% ( 82) 00:30:28.296 3515.113 - 3530.007: 98.0793% ( 77) 00:30:28.296 3530.007 - 3544.902: 98.2222% ( 75) 00:30:28.296 3544.902 - 3559.796: 98.3670% ( 76) 00:30:28.296 3559.796 - 3574.691: 98.5004% ( 70) 00:30:28.296 3574.691 - 3589.585: 98.6338% ( 70) 00:30:28.296 3589.585 - 3604.480: 98.7500% ( 61) 00:30:28.296 3604.480 - 3619.375: 98.8529% ( 54) 00:30:28.296 3619.375 - 3634.269: 98.9634% ( 58) 00:30:28.296 3634.269 - 3649.164: 99.0434% ( 42) 00:30:28.296 3649.164 - 3664.058: 99.1082% ( 34) 00:30:28.296 3664.058 - 3678.953: 99.1730% ( 34) 00:30:28.296 3678.953 - 3693.847: 99.2340% ( 32) 00:30:28.296 3693.847 - 3708.742: 99.2912% ( 30) 00:30:28.296 3708.742 - 3723.636: 99.3350% ( 23) 00:30:28.296 3723.636 - 3738.531: 99.3788% ( 23) 00:30:28.296 3738.531 - 3753.425: 99.4245% ( 24) 00:30:28.296 3753.425 - 3768.320: 99.4646% ( 21) 00:30:28.296 3768.320 - 3783.215: 99.4970% ( 17) 00:30:28.296 3783.215 - 3798.109: 99.5255% ( 15) 00:30:28.296 3798.109 - 3813.004: 99.5484% ( 12) 00:30:28.296 3813.004 - 3842.793: 99.5827% ( 18) 00:30:28.296 3842.793 - 3872.582: 99.6208% ( 20) 00:30:28.296 3872.582 - 3902.371: 99.6437% ( 12) 00:30:28.296 3902.371 - 3932.160: 99.6684% ( 13) 00:30:28.296 3932.160 - 3961.949: 99.6856% ( 9) 00:30:28.296 3961.949 - 3991.738: 99.6989% ( 7) 00:30:28.296 3991.738 - 4021.527: 99.7123% ( 7) 00:30:28.296 4021.527 - 4051.316: 99.7180% ( 3) 00:30:28.296 4051.316 - 4081.105: 99.7275% ( 5) 00:30:28.296 4081.105 - 4110.895: 99.7351% ( 4) 00:30:28.296 4110.895 - 4140.684: 99.7447% ( 5) 00:30:28.296 4140.684 - 4170.473: 99.7523% ( 4) 00:30:28.296 4170.473 - 4200.262: 99.7618% ( 5) 00:30:28.296 4200.262 - 4230.051: 99.7694% ( 4) 00:30:28.296 4230.051 - 4259.840: 99.7790% ( 5) 00:30:28.296 4259.840 - 4289.629: 99.7885% ( 5) 00:30:28.296 4289.629 - 4319.418: 99.7980% ( 5) 00:30:28.296 4319.418 - 4349.207: 99.8056% ( 4) 00:30:28.296 4349.207 - 4378.996: 99.8133% ( 4) 00:30:28.296 4378.996 - 4408.785: 99.8228% ( 5) 00:30:28.296 4408.785 - 4438.575: 99.8285% ( 3) 00:30:28.296 4438.575 - 4468.364: 99.8342% ( 3) 00:30:28.296 4498.153 - 4527.942: 99.8361% ( 1) 00:30:28.296 4527.942 - 4557.731: 99.8380% ( 1) 00:30:28.296 4557.731 - 4587.520: 99.8399% ( 1) 00:30:28.296 4587.520 - 4617.309: 99.8418% ( 1) 00:30:28.296 4617.309 - 4647.098: 99.8438% ( 1) 00:30:28.296 4647.098 - 4676.887: 99.8457% ( 1) 00:30:28.296 4676.887 - 4706.676: 99.8476% ( 1) 00:30:28.296 4706.676 - 4736.465: 99.8495% ( 1) 00:30:28.296 4736.465 - 4766.255: 99.8514% ( 1) 00:30:28.296 4766.255 - 4796.044: 99.8533% ( 1) 00:30:28.296 4796.044 - 4825.833: 99.8552% ( 1) 
00:30:28.296 4825.833 - 4855.622: 99.8571% ( 1) 00:30:28.296 4855.622 - 4885.411: 99.8590% ( 1) 00:30:28.296 4885.411 - 4915.200: 99.8609% ( 1) 00:30:28.296 4915.200 - 4944.989: 99.8628% ( 1) 00:30:28.296 4944.989 - 4974.778: 99.8647% ( 1) 00:30:28.296 4974.778 - 5004.567: 99.8666% ( 1) 00:30:28.296 5004.567 - 5034.356: 99.8685% ( 1) 00:30:28.296 5034.356 - 5064.145: 99.8704% ( 1) 00:30:28.296 5064.145 - 5093.935: 99.8723% ( 1) 00:30:28.296 5093.935 - 5123.724: 99.8742% ( 1) 00:30:28.296 5123.724 - 5153.513: 99.8761% ( 1) 00:30:28.296 5153.513 - 5183.302: 99.8780% ( 1) 00:30:28.296 5213.091 - 5242.880: 99.8819% ( 2) 00:30:28.296 5272.669 - 5302.458: 99.8838% ( 1) 00:30:28.296 5302.458 - 5332.247: 99.8857% ( 1) 00:30:28.296 5332.247 - 5362.036: 99.8876% ( 1) 00:30:28.296 5362.036 - 5391.825: 99.8895% ( 1) 00:30:28.296 5391.825 - 5421.615: 99.8914% ( 1) 00:30:28.296 5421.615 - 5451.404: 99.8933% ( 1) 00:30:28.296 5451.404 - 5481.193: 99.8952% ( 1) 00:30:28.296 5481.193 - 5510.982: 99.8971% ( 1) 00:30:28.296 5510.982 - 5540.771: 99.8990% ( 1) 00:30:28.296 5540.771 - 5570.560: 99.9009% ( 1) 00:30:28.296 5570.560 - 5600.349: 99.9028% ( 1) 00:30:28.296 5600.349 - 5630.138: 99.9047% ( 1) 00:30:28.296 5630.138 - 5659.927: 99.9066% ( 1) 00:30:28.296 5659.927 - 5689.716: 99.9085% ( 1) 00:30:28.296 5689.716 - 5719.505: 99.9104% ( 1) 00:30:28.296 5749.295 - 5779.084: 99.9123% ( 1) 00:30:28.296 5779.084 - 5808.873: 99.9143% ( 1) 00:30:28.296 5808.873 - 5838.662: 99.9162% ( 1) 00:30:28.296 5838.662 - 5868.451: 99.9181% ( 1) 00:30:28.296 5868.451 - 5898.240: 99.9200% ( 1) 00:30:28.296 5898.240 - 5928.029: 99.9219% ( 1) 00:30:28.296 5928.029 - 5957.818: 99.9238% ( 1) 00:30:28.296 5957.818 - 5987.607: 99.9257% ( 1) 00:30:28.296 6017.396 - 6047.185: 99.9276% ( 1) 00:30:28.296 6047.185 - 6076.975: 99.9295% ( 1) 00:30:28.296 6076.975 - 6106.764: 99.9314% ( 1) 00:30:28.296 6106.764 - 6136.553: 99.9333% ( 1) 00:30:28.296 6136.553 - 6166.342: 99.9352% ( 1) 00:30:28.296 6166.342 - 6196.131: 99.9371% ( 1) 00:30:28.296 6196.131 - 6225.920: 99.9390% ( 1) 00:30:28.296 6225.920 - 6255.709: 99.9409% ( 1) 00:30:28.296 6255.709 - 6285.498: 99.9428% ( 1) 00:30:28.296 6285.498 - 6315.287: 99.9447% ( 1) 00:30:28.296 6315.287 - 6345.076: 99.9466% ( 1) 00:30:28.296 6345.076 - 6374.865: 99.9486% ( 1) 00:30:28.296 6374.865 - 6404.655: 99.9505% ( 1) 00:30:28.296 6434.444 - 6464.233: 99.9543% ( 2) 00:30:28.296 6494.022 - 6523.811: 99.9562% ( 1) 00:30:28.296 6523.811 - 6553.600: 99.9581% ( 1) 00:30:28.296 6553.600 - 6583.389: 99.9600% ( 1) 00:30:28.296 6583.389 - 6613.178: 99.9619% ( 1) 00:30:28.296 6613.178 - 6642.967: 99.9638% ( 1) 00:30:28.296 6642.967 - 6672.756: 99.9657% ( 1) 00:30:28.296 6672.756 - 6702.545: 99.9676% ( 1) 00:30:28.296 6702.545 - 6732.335: 99.9695% ( 1) 00:30:28.296 6732.335 - 6762.124: 99.9714% ( 1) 00:30:28.296 6762.124 - 6791.913: 99.9733% ( 1) 00:30:28.296 6791.913 - 6821.702: 99.9752% ( 1) 00:30:28.296 6821.702 - 6851.491: 99.9771% ( 1) 00:30:28.296 6851.491 - 6881.280: 99.9790% ( 1) 00:30:28.296 6881.280 - 6911.069: 99.9809% ( 1) 00:30:28.296 6911.069 - 6940.858: 99.9829% ( 1) 00:30:28.296 6940.858 - 6970.647: 99.9848% ( 1) 00:30:28.296 6970.647 - 7000.436: 99.9867% ( 1) 00:30:28.296 7000.436 - 7030.225: 99.9886% ( 1) 00:30:28.296 7030.225 - 7060.015: 99.9905% ( 1) 00:30:28.296 7060.015 - 7089.804: 99.9924% ( 1) 00:30:28.296 7089.804 - 7119.593: 99.9943% ( 1) 00:30:28.296 7119.593 - 7149.382: 99.9962% ( 1) 00:30:28.296 7179.171 - 7208.960: 99.9981% ( 1) 00:30:28.296 7268.538 - 7298.327: 100.0000% ( 1) 
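That closes the 1-second, queue-depth-128 read run; the write run that follows uses the same shape. Reading the summary above: 52480 IOPS of 12 KiB (-o 12288) reads works out to the reported ~615 MiB/s, and the average latency of ~2437 us is consistent with Little's law (128 outstanding / 2.437 ms ≈ 52.5 k IOPS). The percentile list reads cumulatively, so "50.00000% : 2427.811us" means half of the I/Os completed within roughly 2.43 ms, and the "Range in us / Cumulative IO count" histogram below it gives the full cumulative distribution. The two invocations, copied from the trace for reproduction (the -N and -LL options are simply whatever the harness passed):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read  -o 12288 -t 1 -LL -i 0 -N
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0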
00:30:28.296 00:30:28.296 13:15:46 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:30:29.669 Initializing NVMe Controllers 00:30:29.669 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:29.669 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:29.669 Initialization complete. Launching workers. 00:30:29.669 ======================================================== 00:30:29.669 Latency(us) 00:30:29.669 Device Information : IOPS MiB/s Average min max 00:30:29.669 PCIE (0000:00:06.0) NSID 1 from core 0: 54588.50 639.71 2344.38 1141.49 8860.42 00:30:29.669 ======================================================== 00:30:29.669 Total : 54588.50 639.71 2344.38 1141.49 8860.42 00:30:29.669 00:30:29.669 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:29.669 ================================================================================= 00:30:29.669 1.00000% : 1630.953us 00:30:29.669 10.00000% : 1921.396us 00:30:29.670 25.00000% : 2085.236us 00:30:29.670 50.00000% : 2278.865us 00:30:29.670 75.00000% : 2532.073us 00:30:29.670 90.00000% : 2889.542us 00:30:29.670 95.00000% : 3112.960us 00:30:29.670 98.00000% : 3440.640us 00:30:29.670 99.00000% : 3783.215us 00:30:29.670 99.50000% : 4051.316us 00:30:29.670 99.90000% : 4944.989us 00:30:29.670 99.99000% : 6345.076us 00:30:29.670 99.99900% : 8877.149us 00:30:29.670 99.99990% : 8877.149us 00:30:29.670 99.99999% : 8877.149us 00:30:29.670 00:30:29.670 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:29.670 ============================================================================== 00:30:29.670 Range in us Cumulative IO count 00:30:29.670 1139.433 - 1146.880: 0.0018% ( 1) 00:30:29.670 1251.142 - 1258.589: 0.0037% ( 1) 00:30:29.670 1258.589 - 1266.036: 0.0073% ( 2) 00:30:29.670 1273.484 - 1280.931: 0.0092% ( 1) 00:30:29.670 1303.273 - 1310.720: 0.0110% ( 1) 00:30:29.670 1310.720 - 1318.167: 0.0128% ( 1) 00:30:29.670 1318.167 - 1325.615: 0.0220% ( 5) 00:30:29.670 1325.615 - 1333.062: 0.0293% ( 4) 00:30:29.670 1347.956 - 1355.404: 0.0311% ( 1) 00:30:29.670 1355.404 - 1362.851: 0.0403% ( 5) 00:30:29.670 1362.851 - 1370.298: 0.0421% ( 1) 00:30:29.670 1370.298 - 1377.745: 0.0476% ( 3) 00:30:29.670 1377.745 - 1385.193: 0.0494% ( 1) 00:30:29.670 1385.193 - 1392.640: 0.0549% ( 3) 00:30:29.670 1392.640 - 1400.087: 0.0604% ( 3) 00:30:29.670 1400.087 - 1407.535: 0.0622% ( 1) 00:30:29.670 1407.535 - 1414.982: 0.0787% ( 9) 00:30:29.670 1414.982 - 1422.429: 0.0860% ( 4) 00:30:29.670 1422.429 - 1429.876: 0.1007% ( 8) 00:30:29.670 1429.876 - 1437.324: 0.1080% ( 4) 00:30:29.670 1437.324 - 1444.771: 0.1172% ( 5) 00:30:29.670 1444.771 - 1452.218: 0.1281% ( 6) 00:30:29.670 1452.218 - 1459.665: 0.1391% ( 6) 00:30:29.670 1459.665 - 1467.113: 0.1556% ( 9) 00:30:29.670 1467.113 - 1474.560: 0.1721% ( 9) 00:30:29.670 1474.560 - 1482.007: 0.2160% ( 24) 00:30:29.670 1482.007 - 1489.455: 0.2361% ( 11) 00:30:29.670 1489.455 - 1496.902: 0.2947% ( 32) 00:30:29.670 1496.902 - 1504.349: 0.3203% ( 14) 00:30:29.670 1504.349 - 1511.796: 0.3313% ( 6) 00:30:29.670 1511.796 - 1519.244: 0.3515% ( 11) 00:30:29.670 1519.244 - 1526.691: 0.3753% ( 13) 00:30:29.670 1526.691 - 1534.138: 0.3917% ( 9) 00:30:29.670 1534.138 - 1541.585: 0.4119% ( 11) 00:30:29.670 1541.585 - 1549.033: 0.4302% ( 10) 00:30:29.670 1549.033 - 1556.480: 0.4613% ( 17) 00:30:29.670 1556.480 - 1563.927: 0.4942% ( 18) 00:30:29.670 1563.927 - 1571.375: 0.5217% ( 15) 00:30:29.670 1571.375 - 1578.822: 0.5583% 
( 20) 00:30:29.670 1578.822 - 1586.269: 0.5986% ( 22) 00:30:29.670 1586.269 - 1593.716: 0.6443% ( 25) 00:30:29.670 1593.716 - 1601.164: 0.7194% ( 41) 00:30:29.670 1601.164 - 1608.611: 0.8420% ( 67) 00:30:29.670 1608.611 - 1616.058: 0.9208% ( 43) 00:30:29.670 1616.058 - 1623.505: 0.9867% ( 36) 00:30:29.670 1623.505 - 1630.953: 1.1441% ( 86) 00:30:29.670 1630.953 - 1638.400: 1.2265% ( 45) 00:30:29.670 1638.400 - 1645.847: 1.3363% ( 60) 00:30:29.670 1645.847 - 1653.295: 1.4608% ( 68) 00:30:29.670 1653.295 - 1660.742: 1.5340% ( 40) 00:30:29.670 1660.742 - 1668.189: 1.5999% ( 36) 00:30:29.670 1668.189 - 1675.636: 1.6658% ( 36) 00:30:29.670 1675.636 - 1683.084: 1.7299% ( 35) 00:30:29.670 1683.084 - 1690.531: 1.8159% ( 47) 00:30:29.670 1690.531 - 1697.978: 1.9092% ( 51) 00:30:29.670 1697.978 - 1705.425: 2.0905% ( 99) 00:30:29.670 1705.425 - 1712.873: 2.1820% ( 50) 00:30:29.670 1712.873 - 1720.320: 2.2991% ( 64) 00:30:29.670 1720.320 - 1727.767: 2.4273% ( 70) 00:30:29.670 1727.767 - 1735.215: 2.5994% ( 94) 00:30:29.670 1735.215 - 1742.662: 2.7476% ( 81) 00:30:29.670 1742.662 - 1750.109: 2.9417% ( 106) 00:30:29.670 1750.109 - 1757.556: 3.1027% ( 88) 00:30:29.670 1757.556 - 1765.004: 3.3059% ( 111) 00:30:29.670 1765.004 - 1772.451: 3.4835% ( 97) 00:30:29.670 1772.451 - 1779.898: 3.7379% ( 139) 00:30:29.670 1779.898 - 1787.345: 3.9704% ( 127) 00:30:29.670 1787.345 - 1794.793: 4.2578% ( 157) 00:30:29.670 1794.793 - 1802.240: 4.4994% ( 132) 00:30:29.670 1802.240 - 1809.687: 4.7154% ( 118) 00:30:29.670 1809.687 - 1817.135: 4.9516% ( 129) 00:30:29.670 1817.135 - 1824.582: 5.2170% ( 145) 00:30:29.670 1824.582 - 1832.029: 5.5227% ( 167) 00:30:29.670 1832.029 - 1839.476: 5.8412% ( 174) 00:30:29.670 1839.476 - 1846.924: 6.1195% ( 152) 00:30:29.670 1846.924 - 1854.371: 6.4673% ( 190) 00:30:29.670 1854.371 - 1861.818: 6.8279% ( 197) 00:30:29.670 1861.818 - 1869.265: 7.2544% ( 233) 00:30:29.670 1869.265 - 1876.713: 7.7047% ( 246) 00:30:29.670 1876.713 - 1884.160: 8.1440% ( 240) 00:30:29.670 1884.160 - 1891.607: 8.5980% ( 248) 00:30:29.670 1891.607 - 1899.055: 9.1673% ( 311) 00:30:29.670 1899.055 - 1906.502: 9.8080% ( 350) 00:30:29.670 1906.502 - 1921.396: 10.9521% ( 625) 00:30:29.670 1921.396 - 1936.291: 12.1236% ( 640) 00:30:29.670 1936.291 - 1951.185: 13.2292% ( 604) 00:30:29.670 1951.185 - 1966.080: 14.2745% ( 571) 00:30:29.670 1966.080 - 1980.975: 15.5668% ( 706) 00:30:29.670 1980.975 - 1995.869: 16.7841% ( 665) 00:30:29.670 1995.869 - 2010.764: 18.1021% ( 720) 00:30:29.670 2010.764 - 2025.658: 19.5720% ( 803) 00:30:29.670 2025.658 - 2040.553: 21.0895% ( 829) 00:30:29.670 2040.553 - 2055.447: 22.8413% ( 957) 00:30:29.670 2055.447 - 2070.342: 24.6316% ( 978) 00:30:29.670 2070.342 - 2085.236: 26.3560% ( 942) 00:30:29.670 2085.236 - 2100.131: 28.3769% ( 1104) 00:30:29.670 2100.131 - 2115.025: 30.4033% ( 1107) 00:30:29.670 2115.025 - 2129.920: 32.2429% ( 1005) 00:30:29.670 2129.920 - 2144.815: 34.2822% ( 1114) 00:30:29.670 2144.815 - 2159.709: 36.3488% ( 1129) 00:30:29.670 2159.709 - 2174.604: 38.5345% ( 1194) 00:30:29.670 2174.604 - 2189.498: 40.3339% ( 983) 00:30:29.670 2189.498 - 2204.393: 42.2102% ( 1025) 00:30:29.670 2204.393 - 2219.287: 43.9089% ( 928) 00:30:29.670 2219.287 - 2234.182: 45.5893% ( 918) 00:30:29.670 2234.182 - 2249.076: 47.0977% ( 824) 00:30:29.670 2249.076 - 2263.971: 48.8440% ( 954) 00:30:29.670 2263.971 - 2278.865: 50.6416% ( 982) 00:30:29.670 2278.865 - 2293.760: 52.6552% ( 1100) 00:30:29.670 2293.760 - 2308.655: 54.4802% ( 997) 00:30:29.670 2308.655 - 2323.549: 56.5542% ( 1133) 
00:30:29.670 2323.549 - 2338.444: 58.1431% ( 868) 00:30:29.670 2338.444 - 2353.338: 59.6057% ( 799) 00:30:29.670 2353.338 - 2368.233: 61.1324% ( 834) 00:30:29.670 2368.233 - 2383.127: 62.8640% ( 946) 00:30:29.670 2383.127 - 2398.022: 64.5042% ( 896) 00:30:29.670 2398.022 - 2412.916: 66.0693% ( 855) 00:30:29.670 2412.916 - 2427.811: 67.2701% ( 656) 00:30:29.670 2427.811 - 2442.705: 68.4600% ( 650) 00:30:29.670 2442.705 - 2457.600: 69.6132% ( 630) 00:30:29.670 2457.600 - 2472.495: 70.8506% ( 676) 00:30:29.670 2472.495 - 2487.389: 72.1009% ( 683) 00:30:29.670 2487.389 - 2502.284: 73.1736% ( 586) 00:30:29.670 2502.284 - 2517.178: 74.1200% ( 517) 00:30:29.670 2517.178 - 2532.073: 75.0664% ( 517) 00:30:29.670 2532.073 - 2546.967: 76.1299% ( 581) 00:30:29.670 2546.967 - 2561.862: 76.9481% ( 447) 00:30:29.670 2561.862 - 2576.756: 77.7646% ( 446) 00:30:29.670 2576.756 - 2591.651: 78.6341% ( 475) 00:30:29.670 2591.651 - 2606.545: 79.3882% ( 412) 00:30:29.670 2606.545 - 2621.440: 80.0729% ( 374) 00:30:29.670 2621.440 - 2636.335: 80.8069% ( 401) 00:30:29.670 2636.335 - 2651.229: 81.4567% ( 355) 00:30:29.670 2651.229 - 2666.124: 82.1084% ( 356) 00:30:29.670 2666.124 - 2681.018: 82.8168% ( 387) 00:30:29.670 2681.018 - 2695.913: 83.4410% ( 341) 00:30:29.670 2695.913 - 2710.807: 84.0652% ( 341) 00:30:29.670 2710.807 - 2725.702: 84.6711% ( 331) 00:30:29.670 2725.702 - 2740.596: 85.2276% ( 304) 00:30:29.670 2740.596 - 2755.491: 85.8225% ( 325) 00:30:29.670 2755.491 - 2770.385: 86.3772% ( 303) 00:30:29.670 2770.385 - 2785.280: 86.9044% ( 288) 00:30:29.670 2785.280 - 2800.175: 87.4206% ( 282) 00:30:29.670 2800.175 - 2815.069: 87.8984% ( 261) 00:30:29.670 2815.069 - 2829.964: 88.3816% ( 264) 00:30:29.670 2829.964 - 2844.858: 88.8576% ( 260) 00:30:29.670 2844.858 - 2859.753: 89.3006% ( 242) 00:30:29.670 2859.753 - 2874.647: 89.7637% ( 253) 00:30:29.670 2874.647 - 2889.542: 90.1847% ( 230) 00:30:29.670 2889.542 - 2904.436: 90.6002% ( 227) 00:30:29.670 2904.436 - 2919.331: 91.0231% ( 231) 00:30:29.670 2919.331 - 2934.225: 91.4258% ( 220) 00:30:29.670 2934.225 - 2949.120: 91.8395% ( 226) 00:30:29.670 2949.120 - 2964.015: 92.2477% ( 223) 00:30:29.670 2964.015 - 2978.909: 92.6138% ( 200) 00:30:29.670 2978.909 - 2993.804: 92.9506% ( 184) 00:30:29.670 2993.804 - 3008.698: 93.2948% ( 188) 00:30:29.670 3008.698 - 3023.593: 93.6078% ( 171) 00:30:29.670 3023.593 - 3038.487: 93.8860% ( 152) 00:30:29.670 3038.487 - 3053.382: 94.1862% ( 164) 00:30:29.670 3053.382 - 3068.276: 94.4681% ( 154) 00:30:29.670 3068.276 - 3083.171: 94.7061% ( 130) 00:30:29.670 3083.171 - 3098.065: 94.9770% ( 148) 00:30:29.670 3098.065 - 3112.960: 95.1930% ( 118) 00:30:29.670 3112.960 - 3127.855: 95.3926% ( 109) 00:30:29.670 3127.855 - 3142.749: 95.6141% ( 121) 00:30:29.670 3142.749 - 3157.644: 95.8008% ( 102) 00:30:29.670 3157.644 - 3172.538: 95.9875% ( 102) 00:30:29.670 3172.538 - 3187.433: 96.1742% ( 102) 00:30:29.671 3187.433 - 3202.327: 96.3499% ( 96) 00:30:29.671 3202.327 - 3217.222: 96.4872% ( 75) 00:30:29.671 3217.222 - 3232.116: 96.6410% ( 84) 00:30:29.671 3232.116 - 3247.011: 96.7508% ( 60) 00:30:29.671 3247.011 - 3261.905: 96.8863% ( 74) 00:30:29.671 3261.905 - 3276.800: 97.0071% ( 66) 00:30:29.671 3276.800 - 3291.695: 97.1133% ( 58) 00:30:29.671 3291.695 - 3306.589: 97.2103% ( 53) 00:30:29.671 3306.589 - 3321.484: 97.2981% ( 48) 00:30:29.671 3321.484 - 3336.378: 97.3897% ( 50) 00:30:29.671 3336.378 - 3351.273: 97.4794% ( 49) 00:30:29.671 3351.273 - 3366.167: 97.5709% ( 50) 00:30:29.671 3366.167 - 3381.062: 97.6588% ( 48) 00:30:29.671 
3381.062 - 3395.956: 97.7558% ( 53) 00:30:29.671 3395.956 - 3410.851: 97.8345% ( 43) 00:30:29.671 3410.851 - 3425.745: 97.9242% ( 49) 00:30:29.671 3425.745 - 3440.640: 98.0029% ( 43) 00:30:29.671 3440.640 - 3455.535: 98.0560% ( 29) 00:30:29.671 3455.535 - 3470.429: 98.1182% ( 34) 00:30:29.671 3470.429 - 3485.324: 98.1768% ( 32) 00:30:29.671 3485.324 - 3500.218: 98.2299% ( 29) 00:30:29.671 3500.218 - 3515.113: 98.2848% ( 30) 00:30:29.671 3515.113 - 3530.007: 98.3306% ( 25) 00:30:29.671 3530.007 - 3544.902: 98.3727% ( 23) 00:30:29.671 3544.902 - 3559.796: 98.4276% ( 30) 00:30:29.671 3559.796 - 3574.691: 98.4752% ( 26) 00:30:29.671 3574.691 - 3589.585: 98.5173% ( 23) 00:30:29.671 3589.585 - 3604.480: 98.5575% ( 22) 00:30:29.671 3604.480 - 3619.375: 98.6015% ( 24) 00:30:29.671 3619.375 - 3634.269: 98.6399% ( 21) 00:30:29.671 3634.269 - 3649.164: 98.6765% ( 20) 00:30:29.671 3649.164 - 3664.058: 98.7150% ( 21) 00:30:29.671 3664.058 - 3678.953: 98.7461% ( 17) 00:30:29.671 3678.953 - 3693.847: 98.7827% ( 20) 00:30:29.671 3693.847 - 3708.742: 98.8211% ( 21) 00:30:29.671 3708.742 - 3723.636: 98.8559% ( 19) 00:30:29.671 3723.636 - 3738.531: 98.8907% ( 19) 00:30:29.671 3738.531 - 3753.425: 98.9328% ( 23) 00:30:29.671 3753.425 - 3768.320: 98.9767% ( 24) 00:30:29.671 3768.320 - 3783.215: 99.0207% ( 24) 00:30:29.671 3783.215 - 3798.109: 99.0536% ( 18) 00:30:29.671 3798.109 - 3813.004: 99.0884% ( 19) 00:30:29.671 3813.004 - 3842.793: 99.1671% ( 43) 00:30:29.671 3842.793 - 3872.582: 99.2293% ( 34) 00:30:29.671 3872.582 - 3902.371: 99.2788% ( 27) 00:30:29.671 3902.371 - 3932.160: 99.3282% ( 27) 00:30:29.671 3932.160 - 3961.949: 99.3795% ( 28) 00:30:29.671 3961.949 - 3991.738: 99.4270% ( 26) 00:30:29.671 3991.738 - 4021.527: 99.4783% ( 28) 00:30:29.671 4021.527 - 4051.316: 99.5204% ( 23) 00:30:29.671 4051.316 - 4081.105: 99.5698% ( 27) 00:30:29.671 4081.105 - 4110.895: 99.6119% ( 23) 00:30:29.671 4110.895 - 4140.684: 99.6504% ( 21) 00:30:29.671 4140.684 - 4170.473: 99.6906% ( 22) 00:30:29.671 4170.473 - 4200.262: 99.7254% ( 19) 00:30:29.671 4200.262 - 4230.051: 99.7584% ( 18) 00:30:29.671 4230.051 - 4259.840: 99.7877% ( 16) 00:30:29.671 4259.840 - 4289.629: 99.8096% ( 12) 00:30:29.671 4289.629 - 4319.418: 99.8316% ( 12) 00:30:29.671 4319.418 - 4349.207: 99.8462% ( 8) 00:30:29.671 4349.207 - 4378.996: 99.8590% ( 7) 00:30:29.671 4378.996 - 4408.785: 99.8682% ( 5) 00:30:29.671 4408.785 - 4438.575: 99.8755% ( 4) 00:30:29.671 4438.575 - 4468.364: 99.8774% ( 1) 00:30:29.671 4468.364 - 4498.153: 99.8792% ( 1) 00:30:29.671 4527.942 - 4557.731: 99.8810% ( 1) 00:30:29.671 4557.731 - 4587.520: 99.8828% ( 1) 00:30:29.671 4587.520 - 4617.309: 99.8847% ( 1) 00:30:29.671 4647.098 - 4676.887: 99.8865% ( 1) 00:30:29.671 4736.465 - 4766.255: 99.8883% ( 1) 00:30:29.671 4766.255 - 4796.044: 99.8902% ( 1) 00:30:29.671 4796.044 - 4825.833: 99.8920% ( 1) 00:30:29.671 4825.833 - 4855.622: 99.8938% ( 1) 00:30:29.671 4855.622 - 4885.411: 99.8975% ( 2) 00:30:29.671 4885.411 - 4915.200: 99.8993% ( 1) 00:30:29.671 4915.200 - 4944.989: 99.9012% ( 1) 00:30:29.671 4974.778 - 5004.567: 99.9030% ( 1) 00:30:29.671 5004.567 - 5034.356: 99.9048% ( 1) 00:30:29.671 5034.356 - 5064.145: 99.9066% ( 1) 00:30:29.671 5064.145 - 5093.935: 99.9085% ( 1) 00:30:29.671 5093.935 - 5123.724: 99.9103% ( 1) 00:30:29.671 5123.724 - 5153.513: 99.9121% ( 1) 00:30:29.671 5153.513 - 5183.302: 99.9140% ( 1) 00:30:29.671 5183.302 - 5213.091: 99.9176% ( 2) 00:30:29.671 5213.091 - 5242.880: 99.9195% ( 1) 00:30:29.671 5242.880 - 5272.669: 99.9213% ( 1) 00:30:29.671 
5272.669 - 5302.458: 99.9231% ( 1) 00:30:29.671 5302.458 - 5332.247: 99.9249% ( 1) 00:30:29.671 5332.247 - 5362.036: 99.9268% ( 1) 00:30:29.671 5362.036 - 5391.825: 99.9286% ( 1) 00:30:29.671 5391.825 - 5421.615: 99.9304% ( 1) 00:30:29.671 5421.615 - 5451.404: 99.9323% ( 1) 00:30:29.671 5451.404 - 5481.193: 99.9341% ( 1) 00:30:29.671 5510.982 - 5540.771: 99.9378% ( 2) 00:30:29.671 5540.771 - 5570.560: 99.9396% ( 1) 00:30:29.671 5600.349 - 5630.138: 99.9414% ( 1) 00:30:29.671 5630.138 - 5659.927: 99.9451% ( 2) 00:30:29.671 5659.927 - 5689.716: 99.9469% ( 1) 00:30:29.671 5689.716 - 5719.505: 99.9487% ( 1) 00:30:29.671 5719.505 - 5749.295: 99.9506% ( 1) 00:30:29.671 5749.295 - 5779.084: 99.9524% ( 1) 00:30:29.671 5779.084 - 5808.873: 99.9542% ( 1) 00:30:29.671 5808.873 - 5838.662: 99.9561% ( 1) 00:30:29.671 5838.662 - 5868.451: 99.9579% ( 1) 00:30:29.671 5868.451 - 5898.240: 99.9597% ( 1) 00:30:29.671 5898.240 - 5928.029: 99.9616% ( 1) 00:30:29.671 5928.029 - 5957.818: 99.9634% ( 1) 00:30:29.671 5957.818 - 5987.607: 99.9671% ( 2) 00:30:29.671 5987.607 - 6017.396: 99.9689% ( 1) 00:30:29.671 6017.396 - 6047.185: 99.9707% ( 1) 00:30:29.671 6047.185 - 6076.975: 99.9744% ( 2) 00:30:29.671 6076.975 - 6106.764: 99.9762% ( 1) 00:30:29.671 6106.764 - 6136.553: 99.9780% ( 1) 00:30:29.671 6136.553 - 6166.342: 99.9799% ( 1) 00:30:29.671 6166.342 - 6196.131: 99.9835% ( 2) 00:30:29.671 6225.920 - 6255.709: 99.9854% ( 1) 00:30:29.671 6285.498 - 6315.287: 99.9890% ( 2) 00:30:29.671 6315.287 - 6345.076: 99.9908% ( 1) 00:30:29.671 6345.076 - 6374.865: 99.9927% ( 1) 00:30:29.671 6374.865 - 6404.655: 99.9945% ( 1) 00:30:29.671 6404.655 - 6434.444: 99.9963% ( 1) 00:30:29.671 6464.233 - 6494.022: 99.9982% ( 1) 00:30:29.671 8817.571 - 8877.149: 100.0000% ( 1) 00:30:29.671 00:30:29.671 ************************************ 00:30:29.671 END TEST nvme_perf 00:30:29.671 ************************************ 00:30:29.671 13:15:48 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:30:29.671 00:30:29.671 real 0m2.695s 00:30:29.671 user 0m2.251s 00:30:29.671 sys 0m0.289s 00:30:29.671 13:15:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:29.671 13:15:48 -- common/autotest_common.sh@10 -- # set +x 00:30:29.671 13:15:48 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:29.671 13:15:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:30:29.671 13:15:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:29.671 13:15:48 -- common/autotest_common.sh@10 -- # set +x 00:30:29.671 ************************************ 00:30:29.671 START TEST nvme_hello_world 00:30:29.671 ************************************ 00:30:29.671 13:15:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:29.928 Initializing NVMe Controllers 00:30:29.928 Attached to 0000:00:06.0 00:30:29.928 Namespace ID: 1 size: 5GB 00:30:29.928 Initialization complete. 00:30:29.928 INFO: using host memory buffer for IO 00:30:29.928 Hello world! 
00:30:29.928 ************************************ 00:30:29.928 END TEST nvme_hello_world 00:30:29.928 ************************************ 00:30:29.928 00:30:29.929 real 0m0.299s 00:30:29.929 user 0m0.109s 00:30:29.929 sys 0m0.124s 00:30:29.929 13:15:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:29.929 13:15:48 -- common/autotest_common.sh@10 -- # set +x 00:30:29.929 13:15:48 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:29.929 13:15:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:29.929 13:15:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:29.929 13:15:48 -- common/autotest_common.sh@10 -- # set +x 00:30:29.929 ************************************ 00:30:29.929 START TEST nvme_sgl 00:30:29.929 ************************************ 00:30:29.929 13:15:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:30.186 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:30:30.186 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:30:30.186 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:30:30.186 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:30:30.186 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:30:30.186 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:30:30.186 NVMe Readv/Writev Request test 00:30:30.186 Attached to 0000:00:06.0 00:30:30.186 0000:00:06.0: build_io_request_2 test passed 00:30:30.186 0000:00:06.0: build_io_request_4 test passed 00:30:30.186 0000:00:06.0: build_io_request_5 test passed 00:30:30.186 0000:00:06.0: build_io_request_6 test passed 00:30:30.186 0000:00:06.0: build_io_request_7 test passed 00:30:30.186 0000:00:06.0: build_io_request_10 test passed 00:30:30.186 Cleaning up... 00:30:30.444 ************************************ 00:30:30.444 END TEST nvme_sgl 00:30:30.444 ************************************ 00:30:30.444 00:30:30.444 real 0m0.400s 00:30:30.444 user 0m0.201s 00:30:30.444 sys 0m0.127s 00:30:30.444 13:15:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.444 13:15:49 -- common/autotest_common.sh@10 -- # set +x 00:30:30.444 13:15:49 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:30.444 13:15:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:30.444 13:15:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:30.444 13:15:49 -- common/autotest_common.sh@10 -- # set +x 00:30:30.444 ************************************ 00:30:30.444 START TEST nvme_e2edp 00:30:30.444 ************************************ 00:30:30.444 13:15:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:30.701 NVMe Write/Read with End-to-End data protection test 00:30:30.701 Attached to 0000:00:06.0 00:30:30.701 Cleaning up... 
00:30:30.701 ************************************ 00:30:30.701 END TEST nvme_e2edp 00:30:30.701 ************************************ 00:30:30.701 00:30:30.701 real 0m0.312s 00:30:30.701 user 0m0.119s 00:30:30.701 sys 0m0.119s 00:30:30.701 13:15:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.701 13:15:49 -- common/autotest_common.sh@10 -- # set +x 00:30:30.701 13:15:49 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:30.701 13:15:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:30.701 13:15:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:30.701 13:15:49 -- common/autotest_common.sh@10 -- # set +x 00:30:30.701 ************************************ 00:30:30.701 START TEST nvme_reserve 00:30:30.701 ************************************ 00:30:30.701 13:15:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:30.959 ===================================================== 00:30:30.959 NVMe Controller at PCI bus 0, device 6, function 0 00:30:30.959 ===================================================== 00:30:30.959 Reservations: Not Supported 00:30:30.959 Reservation test passed 00:30:30.959 ************************************ 00:30:30.959 END TEST nvme_reserve 00:30:30.959 ************************************ 00:30:30.959 00:30:30.959 real 0m0.314s 00:30:30.959 user 0m0.095s 00:30:30.959 sys 0m0.133s 00:30:30.959 13:15:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.959 13:15:49 -- common/autotest_common.sh@10 -- # set +x 00:30:30.959 13:15:49 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:30.959 13:15:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:30.959 13:15:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:30.959 13:15:49 -- common/autotest_common.sh@10 -- # set +x 00:30:31.217 ************************************ 00:30:31.217 START TEST nvme_err_injection 00:30:31.217 ************************************ 00:30:31.217 13:15:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:31.475 NVMe Error Injection test 00:30:31.475 Attached to 0000:00:06.0 00:30:31.475 0000:00:06.0: get features failed as expected 00:30:31.475 0000:00:06.0: get features successfully as expected 00:30:31.475 0000:00:06.0: read failed as expected 00:30:31.475 0000:00:06.0: read successfully as expected 00:30:31.475 Cleaning up... 
00:30:31.475 ************************************ 00:30:31.475 END TEST nvme_err_injection 00:30:31.475 ************************************ 00:30:31.475 00:30:31.475 real 0m0.319s 00:30:31.475 user 0m0.113s 00:30:31.475 sys 0m0.112s 00:30:31.475 13:15:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.475 13:15:50 -- common/autotest_common.sh@10 -- # set +x 00:30:31.475 13:15:50 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:31.475 13:15:50 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:30:31.475 13:15:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:31.475 13:15:50 -- common/autotest_common.sh@10 -- # set +x 00:30:31.475 ************************************ 00:30:31.475 START TEST nvme_overhead 00:30:31.475 ************************************ 00:30:31.475 13:15:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:32.851 Initializing NVMe Controllers 00:30:32.851 Attached to 0000:00:06.0 00:30:32.851 Initialization complete. Launching workers. 00:30:32.851 submit (in ns) avg, min, max = 16580.3, 11255.5, 177058.2 00:30:32.851 complete (in ns) avg, min, max = 11456.5, 7597.3, 90891.4 00:30:32.851 00:30:32.851 Submit histogram 00:30:32.851 ================ 00:30:32.851 Range in us Cumulative Count 00:30:32.851 11.229 - 11.287: 0.0120% ( 1) 00:30:32.851 11.287 - 11.345: 0.0480% ( 3) 00:30:32.851 11.345 - 11.404: 0.0600% ( 1) 00:30:32.851 11.404 - 11.462: 0.0840% ( 2) 00:30:32.851 11.462 - 11.520: 0.1200% ( 3) 00:30:32.851 11.520 - 11.578: 0.1800% ( 5) 00:30:32.851 11.578 - 11.636: 0.2640% ( 7) 00:30:32.851 11.636 - 11.695: 0.3240% ( 5) 00:30:32.851 11.695 - 11.753: 0.4320% ( 9) 00:30:32.851 11.753 - 11.811: 0.5880% ( 13) 00:30:32.851 11.811 - 11.869: 0.8159% ( 19) 00:30:32.851 11.869 - 11.927: 0.9719% ( 13) 00:30:32.851 11.927 - 11.985: 1.1519% ( 15) 00:30:32.851 11.985 - 12.044: 1.4399% ( 24) 00:30:32.851 12.044 - 12.102: 1.8838% ( 37) 00:30:32.851 12.102 - 12.160: 2.1958% ( 26) 00:30:32.851 12.160 - 12.218: 2.5198% ( 27) 00:30:32.851 12.218 - 12.276: 2.8078% ( 24) 00:30:32.851 12.276 - 12.335: 2.9518% ( 12) 00:30:32.851 12.335 - 12.393: 3.2277% ( 23) 00:30:32.851 12.393 - 12.451: 3.6477% ( 35) 00:30:32.851 12.451 - 12.509: 3.9237% ( 23) 00:30:32.851 12.509 - 12.567: 4.1877% ( 22) 00:30:32.852 12.567 - 12.625: 4.5356% ( 29) 00:30:32.852 12.625 - 12.684: 5.0996% ( 47) 00:30:32.852 12.684 - 12.742: 5.6995% ( 50) 00:30:32.852 12.742 - 12.800: 6.3115% ( 51) 00:30:32.852 12.800 - 12.858: 6.8395% ( 44) 00:30:32.852 12.858 - 12.916: 7.8714% ( 86) 00:30:32.852 12.916 - 12.975: 9.0833% ( 101) 00:30:32.852 12.975 - 13.033: 10.9551% ( 156) 00:30:32.852 13.033 - 13.091: 12.5030% ( 129) 00:30:32.852 13.091 - 13.149: 13.9789% ( 123) 00:30:32.852 13.149 - 13.207: 15.8507% ( 156) 00:30:32.852 13.207 - 13.265: 18.0586% ( 184) 00:30:32.852 13.265 - 13.324: 20.6743% ( 218) 00:30:32.852 13.324 - 13.382: 22.9422% ( 189) 00:30:32.852 13.382 - 13.440: 25.0300% ( 174) 00:30:32.852 13.440 - 13.498: 26.5419% ( 126) 00:30:32.852 13.498 - 13.556: 28.3657% ( 152) 00:30:32.852 13.556 - 13.615: 30.3216% ( 163) 00:30:32.852 13.615 - 13.673: 32.4574% ( 178) 00:30:32.852 13.673 - 13.731: 34.4612% ( 167) 00:30:32.852 13.731 - 13.789: 35.7211% ( 105) 00:30:32.852 13.789 - 13.847: 37.0410% ( 110) 00:30:32.852 13.847 - 13.905: 38.3369% ( 108) 00:30:32.852 13.905 - 13.964: 39.6568% ( 110) 00:30:32.852 13.964 - 14.022: 41.0967% ( 120) 
00:30:32.852 14.022 - 14.080: 42.3686% ( 106) 00:30:32.852 14.080 - 14.138: 43.2805% ( 76) 00:30:32.852 14.138 - 14.196: 44.4204% ( 95) 00:30:32.852 14.196 - 14.255: 45.9683% ( 129) 00:30:32.852 14.255 - 14.313: 48.2721% ( 192) 00:30:32.852 14.313 - 14.371: 50.9959% ( 227) 00:30:32.852 14.371 - 14.429: 53.7917% ( 233) 00:30:32.852 14.429 - 14.487: 56.3475% ( 213) 00:30:32.852 14.487 - 14.545: 58.2433% ( 158) 00:30:32.852 14.545 - 14.604: 59.6112% ( 114) 00:30:32.852 14.604 - 14.662: 60.8111% ( 100) 00:30:32.852 14.662 - 14.720: 61.8431% ( 86) 00:30:32.852 14.720 - 14.778: 62.7670% ( 77) 00:30:32.852 14.778 - 14.836: 63.3909% ( 52) 00:30:32.852 14.836 - 14.895: 63.9669% ( 48) 00:30:32.852 14.895 - 15.011: 64.8068% ( 70) 00:30:32.852 15.011 - 15.127: 65.9707% ( 97) 00:30:32.852 15.127 - 15.244: 67.0986% ( 94) 00:30:32.852 15.244 - 15.360: 68.2145% ( 93) 00:30:32.852 15.360 - 15.476: 69.0665% ( 71) 00:30:32.852 15.476 - 15.593: 70.1944% ( 94) 00:30:32.852 15.593 - 15.709: 71.2263% ( 86) 00:30:32.852 15.709 - 15.825: 72.6542% ( 119) 00:30:32.852 15.825 - 15.942: 73.7461% ( 91) 00:30:32.852 15.942 - 16.058: 74.6820% ( 78) 00:30:32.852 16.058 - 16.175: 75.4140% ( 61) 00:30:32.852 16.175 - 16.291: 76.0739% ( 55) 00:30:32.852 16.291 - 16.407: 76.6379% ( 47) 00:30:32.852 16.407 - 16.524: 76.9978% ( 30) 00:30:32.852 16.524 - 16.640: 77.3818% ( 32) 00:30:32.852 16.640 - 16.756: 77.6818% ( 25) 00:30:32.852 16.756 - 16.873: 77.9578% ( 23) 00:30:32.852 16.873 - 16.989: 78.1857% ( 19) 00:30:32.852 16.989 - 17.105: 78.3657% ( 15) 00:30:32.852 17.105 - 17.222: 78.7257% ( 30) 00:30:32.852 17.222 - 17.338: 78.9897% ( 22) 00:30:32.852 17.338 - 17.455: 79.2537% ( 22) 00:30:32.852 17.455 - 17.571: 79.3976% ( 12) 00:30:32.852 17.571 - 17.687: 79.6016% ( 17) 00:30:32.852 17.687 - 17.804: 79.7696% ( 14) 00:30:32.852 17.804 - 17.920: 80.0216% ( 21) 00:30:32.852 17.920 - 18.036: 80.3096% ( 24) 00:30:32.852 18.036 - 18.153: 80.4776% ( 14) 00:30:32.852 18.153 - 18.269: 80.6935% ( 18) 00:30:32.852 18.269 - 18.385: 80.9695% ( 23) 00:30:32.852 18.385 - 18.502: 81.1735% ( 17) 00:30:32.852 18.502 - 18.618: 81.4255% ( 21) 00:30:32.852 18.618 - 18.735: 81.5935% ( 14) 00:30:32.852 18.735 - 18.851: 81.7375% ( 12) 00:30:32.852 18.851 - 18.967: 82.0014% ( 22) 00:30:32.852 18.967 - 19.084: 82.1334% ( 11) 00:30:32.852 19.084 - 19.200: 82.3014% ( 14) 00:30:32.852 19.200 - 19.316: 82.5654% ( 22) 00:30:32.852 19.316 - 19.433: 82.7934% ( 19) 00:30:32.852 19.433 - 19.549: 82.9494% ( 13) 00:30:32.852 19.549 - 19.665: 83.1893% ( 20) 00:30:32.852 19.665 - 19.782: 83.4893% ( 25) 00:30:32.852 19.782 - 19.898: 83.7173% ( 19) 00:30:32.852 19.898 - 20.015: 83.8613% ( 12) 00:30:32.852 20.015 - 20.131: 83.9813% ( 10) 00:30:32.852 20.131 - 20.247: 84.1973% ( 18) 00:30:32.852 20.247 - 20.364: 84.4612% ( 22) 00:30:32.852 20.364 - 20.480: 84.6892% ( 19) 00:30:32.852 20.480 - 20.596: 84.9532% ( 22) 00:30:32.852 20.596 - 20.713: 85.1452% ( 16) 00:30:32.852 20.713 - 20.829: 85.3972% ( 21) 00:30:32.852 20.829 - 20.945: 85.6012% ( 17) 00:30:32.852 20.945 - 21.062: 85.7691% ( 14) 00:30:32.852 21.062 - 21.178: 85.9971% ( 19) 00:30:32.852 21.178 - 21.295: 86.1891% ( 16) 00:30:32.852 21.295 - 21.411: 86.3931% ( 17) 00:30:32.852 21.411 - 21.527: 86.5371% ( 12) 00:30:32.852 21.527 - 21.644: 86.7171% ( 15) 00:30:32.852 21.644 - 21.760: 86.8850% ( 14) 00:30:32.852 21.760 - 21.876: 87.0530% ( 14) 00:30:32.852 21.876 - 21.993: 87.2090% ( 13) 00:30:32.852 21.993 - 22.109: 87.3050% ( 8) 00:30:32.852 22.109 - 22.225: 87.4370% ( 11) 00:30:32.852 22.225 - 22.342: 
87.5810% ( 12) 00:30:32.852 22.342 - 22.458: 87.7610% ( 15) 00:30:32.852 22.458 - 22.575: 87.8810% ( 10) 00:30:32.852 22.575 - 22.691: 88.0730% ( 16) 00:30:32.852 22.691 - 22.807: 88.2049% ( 11) 00:30:32.852 22.807 - 22.924: 88.2889% ( 7) 00:30:32.852 22.924 - 23.040: 88.4449% ( 13) 00:30:32.852 23.040 - 23.156: 88.4929% ( 4) 00:30:32.852 23.156 - 23.273: 88.6129% ( 10) 00:30:32.852 23.273 - 23.389: 88.6849% ( 6) 00:30:32.852 23.389 - 23.505: 88.7689% ( 7) 00:30:32.852 23.505 - 23.622: 88.8409% ( 6) 00:30:32.852 23.622 - 23.738: 88.9009% ( 5) 00:30:32.852 23.738 - 23.855: 88.9849% ( 7) 00:30:32.852 23.855 - 23.971: 89.0329% ( 4) 00:30:32.852 23.971 - 24.087: 89.1049% ( 6) 00:30:32.852 24.087 - 24.204: 89.1649% ( 5) 00:30:32.852 24.204 - 24.320: 89.2609% ( 8) 00:30:32.852 24.436 - 24.553: 89.2849% ( 2) 00:30:32.852 24.553 - 24.669: 89.3209% ( 3) 00:30:32.852 24.669 - 24.785: 89.3689% ( 4) 00:30:32.852 24.785 - 24.902: 89.3928% ( 2) 00:30:32.852 24.902 - 25.018: 89.4408% ( 4) 00:30:32.852 25.018 - 25.135: 89.4888% ( 4) 00:30:32.852 25.135 - 25.251: 89.5008% ( 1) 00:30:32.852 25.367 - 25.484: 89.5368% ( 3) 00:30:32.852 25.484 - 25.600: 89.5728% ( 3) 00:30:32.852 25.600 - 25.716: 89.5848% ( 1) 00:30:32.852 25.716 - 25.833: 89.6208% ( 3) 00:30:32.852 25.833 - 25.949: 89.6448% ( 2) 00:30:32.852 25.949 - 26.065: 89.6808% ( 3) 00:30:32.852 26.065 - 26.182: 89.7048% ( 2) 00:30:32.852 26.182 - 26.298: 89.7528% ( 4) 00:30:32.852 26.298 - 26.415: 89.7768% ( 2) 00:30:32.852 26.415 - 26.531: 89.8488% ( 6) 00:30:32.852 26.531 - 26.647: 89.8608% ( 1) 00:30:32.852 26.647 - 26.764: 89.8968% ( 3) 00:30:32.852 26.764 - 26.880: 89.9688% ( 6) 00:30:32.852 26.880 - 26.996: 90.0048% ( 3) 00:30:32.852 26.996 - 27.113: 90.0768% ( 6) 00:30:32.852 27.113 - 27.229: 90.1248% ( 4) 00:30:32.852 27.229 - 27.345: 90.1968% ( 6) 00:30:32.852 27.345 - 27.462: 90.2568% ( 5) 00:30:32.852 27.462 - 27.578: 90.3888% ( 11) 00:30:32.852 27.578 - 27.695: 90.5688% ( 15) 00:30:32.852 27.695 - 27.811: 90.7487% ( 15) 00:30:32.852 27.811 - 27.927: 91.0727% ( 27) 00:30:32.852 27.927 - 28.044: 91.4447% ( 31) 00:30:32.852 28.044 - 28.160: 91.6847% ( 20) 00:30:32.852 28.160 - 28.276: 92.0326% ( 29) 00:30:32.852 28.276 - 28.393: 92.3566% ( 27) 00:30:32.852 28.393 - 28.509: 92.8606% ( 42) 00:30:32.852 28.509 - 28.625: 93.3165% ( 38) 00:30:32.852 28.625 - 28.742: 93.7845% ( 39) 00:30:32.852 28.742 - 28.858: 94.2285% ( 37) 00:30:32.852 28.858 - 28.975: 94.6004% ( 31) 00:30:32.852 28.975 - 29.091: 95.0564% ( 38) 00:30:32.852 29.091 - 29.207: 95.3324% ( 23) 00:30:32.852 29.207 - 29.324: 95.5724% ( 20) 00:30:32.852 29.324 - 29.440: 95.8723% ( 25) 00:30:32.852 29.440 - 29.556: 96.0283% ( 13) 00:30:32.852 29.556 - 29.673: 96.1603% ( 11) 00:30:32.852 29.673 - 29.789: 96.2683% ( 9) 00:30:32.852 29.789 - 30.022: 96.5443% ( 23) 00:30:32.852 30.022 - 30.255: 96.7483% ( 17) 00:30:32.852 30.255 - 30.487: 96.9522% ( 17) 00:30:32.852 30.487 - 30.720: 97.0962% ( 12) 00:30:32.852 30.720 - 30.953: 97.2282% ( 11) 00:30:32.852 30.953 - 31.185: 97.3722% ( 12) 00:30:32.852 31.185 - 31.418: 97.4802% ( 9) 00:30:32.852 31.418 - 31.651: 97.6122% ( 11) 00:30:32.852 31.651 - 31.884: 97.6482% ( 3) 00:30:32.852 31.884 - 32.116: 97.7322% ( 7) 00:30:32.852 32.116 - 32.349: 97.7562% ( 2) 00:30:32.852 32.349 - 32.582: 97.8042% ( 4) 00:30:32.852 32.582 - 32.815: 97.8642% ( 5) 00:30:32.852 32.815 - 33.047: 97.9122% ( 4) 00:30:32.852 33.047 - 33.280: 97.9722% ( 5) 00:30:32.852 33.280 - 33.513: 97.9962% ( 2) 00:30:32.852 33.513 - 33.745: 98.0202% ( 2) 00:30:32.852 33.745 - 33.978: 
98.0802% ( 5) 00:30:32.852 33.978 - 34.211: 98.1042% ( 2) 00:30:32.852 34.211 - 34.444: 98.1881% ( 7) 00:30:32.852 34.444 - 34.676: 98.2361% ( 4) 00:30:32.853 34.676 - 34.909: 98.2841% ( 4) 00:30:32.853 34.909 - 35.142: 98.3321% ( 4) 00:30:32.853 35.142 - 35.375: 98.3681% ( 3) 00:30:32.853 35.375 - 35.607: 98.4401% ( 6) 00:30:32.853 35.607 - 35.840: 98.5241% ( 7) 00:30:32.853 35.840 - 36.073: 98.5841% ( 5) 00:30:32.853 36.073 - 36.305: 98.6561% ( 6) 00:30:32.853 36.305 - 36.538: 98.7161% ( 5) 00:30:32.853 36.538 - 36.771: 98.7761% ( 5) 00:30:32.853 36.771 - 37.004: 98.8601% ( 7) 00:30:32.853 37.004 - 37.236: 98.8841% ( 2) 00:30:32.853 37.236 - 37.469: 98.9201% ( 3) 00:30:32.853 37.702 - 37.935: 98.9801% ( 5) 00:30:32.853 38.167 - 38.400: 98.9921% ( 1) 00:30:32.853 38.400 - 38.633: 99.0161% ( 2) 00:30:32.853 38.633 - 38.865: 99.0401% ( 2) 00:30:32.853 38.865 - 39.098: 99.0521% ( 1) 00:30:32.853 39.098 - 39.331: 99.1001% ( 4) 00:30:32.853 39.331 - 39.564: 99.1121% ( 1) 00:30:32.853 39.564 - 39.796: 99.1481% ( 3) 00:30:32.853 40.495 - 40.727: 99.1841% ( 3) 00:30:32.853 41.891 - 42.124: 99.1961% ( 1) 00:30:32.853 42.124 - 42.356: 99.2081% ( 1) 00:30:32.853 42.589 - 42.822: 99.2561% ( 4) 00:30:32.853 43.055 - 43.287: 99.2921% ( 3) 00:30:32.853 43.287 - 43.520: 99.3161% ( 2) 00:30:32.853 43.520 - 43.753: 99.3521% ( 3) 00:30:32.853 43.753 - 43.985: 99.3880% ( 3) 00:30:32.853 43.985 - 44.218: 99.4000% ( 1) 00:30:32.853 44.218 - 44.451: 99.4360% ( 3) 00:30:32.853 44.451 - 44.684: 99.4480% ( 1) 00:30:32.853 44.684 - 44.916: 99.4600% ( 1) 00:30:32.853 44.916 - 45.149: 99.4720% ( 1) 00:30:32.853 45.382 - 45.615: 99.4960% ( 2) 00:30:32.853 45.615 - 45.847: 99.5080% ( 1) 00:30:32.853 45.847 - 46.080: 99.5200% ( 1) 00:30:32.853 46.080 - 46.313: 99.5320% ( 1) 00:30:32.853 46.313 - 46.545: 99.5560% ( 2) 00:30:32.853 46.545 - 46.778: 99.5800% ( 2) 00:30:32.853 47.011 - 47.244: 99.5920% ( 1) 00:30:32.853 47.709 - 47.942: 99.6160% ( 2) 00:30:32.853 47.942 - 48.175: 99.6280% ( 1) 00:30:32.853 48.175 - 48.407: 99.6400% ( 1) 00:30:32.853 48.873 - 49.105: 99.6520% ( 1) 00:30:32.853 49.338 - 49.571: 99.6760% ( 2) 00:30:32.853 49.571 - 49.804: 99.6880% ( 1) 00:30:32.853 49.804 - 50.036: 99.7000% ( 1) 00:30:32.853 50.967 - 51.200: 99.7120% ( 1) 00:30:32.853 51.433 - 51.665: 99.7240% ( 1) 00:30:32.853 52.131 - 52.364: 99.7480% ( 2) 00:30:32.853 52.596 - 52.829: 99.7600% ( 1) 00:30:32.853 52.829 - 53.062: 99.7720% ( 1) 00:30:32.853 54.225 - 54.458: 99.7840% ( 1) 00:30:32.853 54.691 - 54.924: 99.7960% ( 1) 00:30:32.853 56.785 - 57.018: 99.8080% ( 1) 00:30:32.853 57.018 - 57.251: 99.8200% ( 1) 00:30:32.853 58.415 - 58.647: 99.8320% ( 1) 00:30:32.853 58.647 - 58.880: 99.8440% ( 1) 00:30:32.853 59.578 - 60.044: 99.8560% ( 1) 00:30:32.853 60.509 - 60.975: 99.8680% ( 1) 00:30:32.853 62.836 - 63.302: 99.8800% ( 1) 00:30:32.853 63.302 - 63.767: 99.9040% ( 2) 00:30:32.853 63.767 - 64.233: 99.9160% ( 1) 00:30:32.853 65.164 - 65.629: 99.9280% ( 1) 00:30:32.853 66.095 - 66.560: 99.9400% ( 1) 00:30:32.853 73.076 - 73.542: 99.9520% ( 1) 00:30:32.853 82.385 - 82.851: 99.9640% ( 1) 00:30:32.853 88.902 - 89.367: 99.9760% ( 1) 00:30:32.853 94.953 - 95.418: 99.9880% ( 1) 00:30:32.853 176.873 - 177.804: 100.0000% ( 1) 00:30:32.853 00:30:32.853 Complete histogram 00:30:32.853 ================== 00:30:32.853 Range in us Cumulative Count 00:30:32.853 7.564 - 7.622: 0.0120% ( 1) 00:30:32.853 7.622 - 7.680: 0.1680% ( 13) 00:30:32.853 7.680 - 7.738: 0.5160% ( 29) 00:30:32.853 7.738 - 7.796: 1.1279% ( 51) 00:30:32.853 7.796 - 7.855: 1.5359% ( 
34) 00:30:32.853 7.855 - 7.913: 2.2918% ( 63) 00:30:32.853 7.913 - 7.971: 3.7077% ( 118) 00:30:32.853 7.971 - 8.029: 5.1836% ( 123) 00:30:32.853 8.029 - 8.087: 6.2995% ( 93) 00:30:32.853 8.087 - 8.145: 8.1234% ( 152) 00:30:32.853 8.145 - 8.204: 10.1872% ( 172) 00:30:32.853 8.204 - 8.262: 12.5270% ( 195) 00:30:32.853 8.262 - 8.320: 14.8788% ( 196) 00:30:32.853 8.320 - 8.378: 16.7987% ( 160) 00:30:32.853 8.378 - 8.436: 18.9825% ( 182) 00:30:32.853 8.436 - 8.495: 21.7423% ( 230) 00:30:32.853 8.495 - 8.553: 24.8740% ( 261) 00:30:32.853 8.553 - 8.611: 27.0098% ( 178) 00:30:32.853 8.611 - 8.669: 28.9657% ( 163) 00:30:32.853 8.669 - 8.727: 31.3175% ( 196) 00:30:32.853 8.727 - 8.785: 34.2453% ( 244) 00:30:32.853 8.785 - 8.844: 37.5690% ( 277) 00:30:32.853 8.844 - 8.902: 42.2246% ( 388) 00:30:32.853 8.902 - 8.960: 47.3002% ( 423) 00:30:32.853 8.960 - 9.018: 51.4759% ( 348) 00:30:32.853 9.018 - 9.076: 54.6796% ( 267) 00:30:32.853 9.076 - 9.135: 56.7555% ( 173) 00:30:32.853 9.135 - 9.193: 57.9194% ( 97) 00:30:32.853 9.193 - 9.251: 58.8553% ( 78) 00:30:32.853 9.251 - 9.309: 59.8872% ( 86) 00:30:32.853 9.309 - 9.367: 61.1351% ( 104) 00:30:32.853 9.367 - 9.425: 62.0470% ( 76) 00:30:32.853 9.425 - 9.484: 62.4550% ( 34) 00:30:32.853 9.484 - 9.542: 62.8270% ( 31) 00:30:32.853 9.542 - 9.600: 63.1150% ( 24) 00:30:32.853 9.600 - 9.658: 63.3549% ( 20) 00:30:32.853 9.658 - 9.716: 63.5589% ( 17) 00:30:32.853 9.716 - 9.775: 63.6429% ( 7) 00:30:32.853 9.775 - 9.833: 63.8589% ( 18) 00:30:32.853 9.833 - 9.891: 64.0149% ( 13) 00:30:32.853 9.891 - 9.949: 64.1589% ( 12) 00:30:32.853 9.949 - 10.007: 64.2549% ( 8) 00:30:32.853 10.007 - 10.065: 64.3749% ( 10) 00:30:32.853 10.065 - 10.124: 64.5308% ( 13) 00:30:32.853 10.124 - 10.182: 64.6508% ( 10) 00:30:32.853 10.182 - 10.240: 64.8548% ( 17) 00:30:32.853 10.240 - 10.298: 65.0348% ( 15) 00:30:32.853 10.298 - 10.356: 65.2148% ( 15) 00:30:32.853 10.356 - 10.415: 65.5388% ( 27) 00:30:32.853 10.415 - 10.473: 65.6827% ( 12) 00:30:32.853 10.473 - 10.531: 65.8387% ( 13) 00:30:32.853 10.531 - 10.589: 66.0907% ( 21) 00:30:32.853 10.589 - 10.647: 66.3427% ( 21) 00:30:32.853 10.647 - 10.705: 66.5467% ( 17) 00:30:32.853 10.705 - 10.764: 66.8467% ( 25) 00:30:32.853 10.764 - 10.822: 67.1346% ( 24) 00:30:32.853 10.822 - 10.880: 67.3626% ( 19) 00:30:32.853 10.880 - 10.938: 67.6026% ( 20) 00:30:32.853 10.938 - 10.996: 67.9386% ( 28) 00:30:32.853 10.996 - 11.055: 68.0586% ( 10) 00:30:32.853 11.055 - 11.113: 68.3105% ( 21) 00:30:32.853 11.113 - 11.171: 68.6585% ( 29) 00:30:32.853 11.171 - 11.229: 68.9585% ( 25) 00:30:32.853 11.229 - 11.287: 69.3665% ( 34) 00:30:32.853 11.287 - 11.345: 69.9064% ( 45) 00:30:32.853 11.345 - 11.404: 70.5784% ( 56) 00:30:32.853 11.404 - 11.462: 71.2863% ( 59) 00:30:32.853 11.462 - 11.520: 71.7903% ( 42) 00:30:32.853 11.520 - 11.578: 72.2942% ( 42) 00:30:32.853 11.578 - 11.636: 72.7262% ( 36) 00:30:32.853 11.636 - 11.695: 73.0982% ( 31) 00:30:32.853 11.695 - 11.753: 73.4341% ( 28) 00:30:32.853 11.753 - 11.811: 73.8541% ( 35) 00:30:32.853 11.811 - 11.869: 74.2381% ( 32) 00:30:32.853 11.869 - 11.927: 74.7900% ( 46) 00:30:32.853 11.927 - 11.985: 75.2700% ( 40) 00:30:32.853 11.985 - 12.044: 75.7499% ( 40) 00:30:32.853 12.044 - 12.102: 76.1819% ( 36) 00:30:32.853 12.102 - 12.160: 76.6139% ( 36) 00:30:32.853 12.160 - 12.218: 76.9258% ( 26) 00:30:32.853 12.218 - 12.276: 77.2018% ( 23) 00:30:32.853 12.276 - 12.335: 77.4178% ( 18) 00:30:32.853 12.335 - 12.393: 77.6218% ( 17) 00:30:32.853 12.393 - 12.451: 77.9218% ( 25) 00:30:32.853 12.451 - 12.509: 78.1377% ( 18) 
00:30:32.853 12.509 - 12.567: 78.4017% ( 22) 00:30:32.853 12.567 - 12.625: 78.5937% ( 16) 00:30:32.853 12.625 - 12.684: 78.8817% ( 24) 00:30:32.853 12.684 - 12.742: 79.1097% ( 19) 00:30:32.853 12.742 - 12.800: 79.3617% ( 21) 00:30:32.853 12.800 - 12.858: 79.6256% ( 22) 00:30:32.853 12.858 - 12.916: 79.8296% ( 17) 00:30:32.853 12.916 - 12.975: 79.9136% ( 7) 00:30:32.853 12.975 - 13.033: 80.1056% ( 16) 00:30:32.853 13.033 - 13.091: 80.3216% ( 18) 00:30:32.853 13.091 - 13.149: 80.5376% ( 18) 00:30:32.853 13.149 - 13.207: 80.7175% ( 15) 00:30:32.853 13.207 - 13.265: 80.8735% ( 13) 00:30:32.853 13.265 - 13.324: 81.1615% ( 24) 00:30:32.853 13.324 - 13.382: 81.4015% ( 20) 00:30:32.853 13.382 - 13.440: 81.6175% ( 18) 00:30:32.853 13.440 - 13.498: 81.7615% ( 12) 00:30:32.853 13.498 - 13.556: 81.8814% ( 10) 00:30:32.853 13.556 - 13.615: 82.0854% ( 17) 00:30:32.853 13.615 - 13.673: 82.2654% ( 15) 00:30:32.853 13.673 - 13.731: 82.4694% ( 17) 00:30:32.853 13.731 - 13.789: 82.6854% ( 18) 00:30:32.853 13.789 - 13.847: 82.9134% ( 19) 00:30:32.853 13.847 - 13.905: 83.0694% ( 13) 00:30:32.853 13.905 - 13.964: 83.2133% ( 12) 00:30:32.853 13.964 - 14.022: 83.3933% ( 15) 00:30:32.853 14.022 - 14.080: 83.5973% ( 17) 00:30:32.853 14.080 - 14.138: 83.8253% ( 19) 00:30:32.853 14.138 - 14.196: 84.0293% ( 17) 00:30:32.853 14.196 - 14.255: 84.2933% ( 22) 00:30:32.853 14.255 - 14.313: 84.4972% ( 17) 00:30:32.853 14.313 - 14.371: 84.6772% ( 15) 00:30:32.854 14.371 - 14.429: 84.7972% ( 10) 00:30:32.854 14.429 - 14.487: 84.9652% ( 14) 00:30:32.854 14.487 - 14.545: 85.1332% ( 14) 00:30:32.854 14.545 - 14.604: 85.2892% ( 13) 00:30:32.854 14.604 - 14.662: 85.4452% ( 13) 00:30:32.854 14.662 - 14.720: 85.6132% ( 14) 00:30:32.854 14.720 - 14.778: 85.7091% ( 8) 00:30:32.854 14.778 - 14.836: 85.8531% ( 12) 00:30:32.854 14.836 - 14.895: 85.9251% ( 6) 00:30:32.854 14.895 - 15.011: 86.1051% ( 15) 00:30:32.854 15.011 - 15.127: 86.3211% ( 18) 00:30:32.854 15.127 - 15.244: 86.5731% ( 21) 00:30:32.854 15.244 - 15.360: 86.8850% ( 26) 00:30:32.854 15.360 - 15.476: 87.1370% ( 21) 00:30:32.854 15.476 - 15.593: 87.3890% ( 21) 00:30:32.854 15.593 - 15.709: 87.5690% ( 15) 00:30:32.854 15.709 - 15.825: 87.7850% ( 18) 00:30:32.854 15.825 - 15.942: 87.9050% ( 10) 00:30:32.854 15.942 - 16.058: 88.0970% ( 16) 00:30:32.854 16.058 - 16.175: 88.2169% ( 10) 00:30:32.854 16.175 - 16.291: 88.3129% ( 8) 00:30:32.854 16.291 - 16.407: 88.4809% ( 14) 00:30:32.854 16.407 - 16.524: 88.6249% ( 12) 00:30:32.854 16.524 - 16.640: 88.7329% ( 9) 00:30:32.854 16.640 - 16.756: 88.8049% ( 6) 00:30:32.854 16.756 - 16.873: 88.9369% ( 11) 00:30:32.854 16.873 - 16.989: 89.0929% ( 13) 00:30:32.854 16.989 - 17.105: 89.2489% ( 13) 00:30:32.854 17.105 - 17.222: 89.3329% ( 7) 00:30:32.854 17.222 - 17.338: 89.4888% ( 13) 00:30:32.854 17.338 - 17.455: 89.5488% ( 5) 00:30:32.854 17.455 - 17.571: 89.7048% ( 13) 00:30:32.854 17.571 - 17.687: 89.7648% ( 5) 00:30:32.854 17.687 - 17.804: 89.8248% ( 5) 00:30:32.854 17.804 - 17.920: 89.8968% ( 6) 00:30:32.854 17.920 - 18.036: 90.0168% ( 10) 00:30:32.854 18.153 - 18.269: 90.1008% ( 7) 00:30:32.854 18.269 - 18.385: 90.1728% ( 6) 00:30:32.854 18.385 - 18.502: 90.2088% ( 3) 00:30:32.854 18.502 - 18.618: 90.2688% ( 5) 00:30:32.854 18.618 - 18.735: 90.3048% ( 3) 00:30:32.854 18.735 - 18.851: 90.3168% ( 1) 00:30:32.854 18.851 - 18.967: 90.3528% ( 3) 00:30:32.854 18.967 - 19.084: 90.3768% ( 2) 00:30:32.854 19.084 - 19.200: 90.4128% ( 3) 00:30:32.854 19.200 - 19.316: 90.4248% ( 1) 00:30:32.854 19.316 - 19.433: 90.4488% ( 2) 00:30:32.854 19.433 - 
19.549: 90.4728% ( 2) 00:30:32.854 19.549 - 19.665: 90.4848% ( 1) 00:30:32.854 19.665 - 19.782: 90.5208% ( 3) 00:30:32.854 19.782 - 19.898: 90.5688% ( 4) 00:30:32.854 19.898 - 20.015: 90.6048% ( 3) 00:30:32.854 20.015 - 20.131: 90.6168% ( 1) 00:30:32.854 20.131 - 20.247: 90.6647% ( 4) 00:30:32.854 20.247 - 20.364: 90.6767% ( 1) 00:30:32.854 20.364 - 20.480: 90.7007% ( 2) 00:30:32.854 20.480 - 20.596: 90.7247% ( 2) 00:30:32.854 20.596 - 20.713: 90.7607% ( 3) 00:30:32.854 20.945 - 21.062: 90.7727% ( 1) 00:30:32.854 21.295 - 21.411: 90.8087% ( 3) 00:30:32.854 21.411 - 21.527: 90.8327% ( 2) 00:30:32.854 21.527 - 21.644: 90.8447% ( 1) 00:30:32.854 21.644 - 21.760: 90.8567% ( 1) 00:30:32.854 21.760 - 21.876: 90.8927% ( 3) 00:30:32.854 21.876 - 21.993: 90.9167% ( 2) 00:30:32.854 21.993 - 22.109: 90.9287% ( 1) 00:30:32.854 22.109 - 22.225: 90.9527% ( 2) 00:30:32.854 22.225 - 22.342: 90.9647% ( 1) 00:30:32.854 22.342 - 22.458: 91.0847% ( 10) 00:30:32.854 22.458 - 22.575: 91.1567% ( 6) 00:30:32.854 22.575 - 22.691: 91.2647% ( 9) 00:30:32.854 22.691 - 22.807: 91.4687% ( 17) 00:30:32.854 22.807 - 22.924: 91.6487% ( 15) 00:30:32.854 22.924 - 23.040: 91.9006% ( 21) 00:30:32.854 23.040 - 23.156: 92.2006% ( 25) 00:30:32.854 23.156 - 23.273: 92.4886% ( 24) 00:30:32.854 23.273 - 23.389: 92.9206% ( 36) 00:30:32.854 23.389 - 23.505: 93.4365% ( 43) 00:30:32.854 23.505 - 23.622: 93.8325% ( 33) 00:30:32.854 23.622 - 23.738: 94.3005% ( 39) 00:30:32.854 23.738 - 23.855: 94.6724% ( 31) 00:30:32.854 23.855 - 23.971: 95.0084% ( 28) 00:30:32.854 23.971 - 24.087: 95.2364% ( 19) 00:30:32.854 24.087 - 24.204: 95.3804% ( 12) 00:30:32.854 24.204 - 24.320: 95.5844% ( 17) 00:30:32.854 24.320 - 24.436: 95.7043% ( 10) 00:30:32.854 24.436 - 24.553: 95.8363% ( 11) 00:30:32.854 24.553 - 24.669: 95.9203% ( 7) 00:30:32.854 24.669 - 24.785: 95.9803% ( 5) 00:30:32.854 24.785 - 24.902: 96.0283% ( 4) 00:30:32.854 24.902 - 25.018: 96.1123% ( 7) 00:30:32.854 25.018 - 25.135: 96.1843% ( 6) 00:30:32.854 25.135 - 25.251: 96.2203% ( 3) 00:30:32.854 25.251 - 25.367: 96.2803% ( 5) 00:30:32.854 25.367 - 25.484: 96.3763% ( 8) 00:30:32.854 25.484 - 25.600: 96.4603% ( 7) 00:30:32.854 25.600 - 25.716: 96.5203% ( 5) 00:30:32.854 25.716 - 25.833: 96.5923% ( 6) 00:30:32.854 25.833 - 25.949: 96.6763% ( 7) 00:30:32.854 25.949 - 26.065: 96.7123% ( 3) 00:30:32.854 26.065 - 26.182: 96.8443% ( 11) 00:30:32.854 26.182 - 26.298: 96.9162% ( 6) 00:30:32.854 26.298 - 26.415: 97.0242% ( 9) 00:30:32.854 26.415 - 26.531: 97.1322% ( 9) 00:30:32.854 26.531 - 26.647: 97.1922% ( 5) 00:30:32.854 26.647 - 26.764: 97.2522% ( 5) 00:30:32.854 26.764 - 26.880: 97.3122% ( 5) 00:30:32.854 26.880 - 26.996: 97.3962% ( 7) 00:30:32.854 26.996 - 27.113: 97.4682% ( 6) 00:30:32.854 27.113 - 27.229: 97.5522% ( 7) 00:30:32.854 27.229 - 27.345: 97.6002% ( 4) 00:30:32.854 27.345 - 27.462: 97.6242% ( 2) 00:30:32.854 27.462 - 27.578: 97.6602% ( 3) 00:30:32.854 27.578 - 27.695: 97.6962% ( 3) 00:30:32.854 27.695 - 27.811: 97.7322% ( 3) 00:30:32.854 27.811 - 27.927: 97.8162% ( 7) 00:30:32.854 27.927 - 28.044: 97.8762% ( 5) 00:30:32.854 28.044 - 28.160: 97.9602% ( 7) 00:30:32.854 28.160 - 28.276: 98.0082% ( 4) 00:30:32.854 28.276 - 28.393: 98.0922% ( 7) 00:30:32.854 28.393 - 28.509: 98.1162% ( 2) 00:30:32.854 28.509 - 28.625: 98.2241% ( 9) 00:30:32.854 28.625 - 28.742: 98.2961% ( 6) 00:30:32.854 28.742 - 28.858: 98.3201% ( 2) 00:30:32.854 28.858 - 28.975: 98.3921% ( 6) 00:30:32.854 28.975 - 29.091: 98.4521% ( 5) 00:30:32.854 29.091 - 29.207: 98.4761% ( 2) 00:30:32.854 29.207 - 29.324: 98.5241% 
( 4) 00:30:32.854 29.324 - 29.440: 98.5601% ( 3) 00:30:32.854 29.440 - 29.556: 98.5961% ( 3) 00:30:32.854 29.556 - 29.673: 98.6441% ( 4) 00:30:32.854 29.673 - 29.789: 98.6681% ( 2) 00:30:32.854 29.789 - 30.022: 98.7761% ( 9) 00:30:32.854 30.022 - 30.255: 98.8361% ( 5) 00:30:32.854 30.255 - 30.487: 98.8961% ( 5) 00:30:32.854 30.487 - 30.720: 98.9321% ( 3) 00:30:32.854 30.720 - 30.953: 99.0041% ( 6) 00:30:32.854 30.953 - 31.185: 99.0521% ( 4) 00:30:32.854 31.185 - 31.418: 99.1001% ( 4) 00:30:32.854 31.418 - 31.651: 99.1601% ( 5) 00:30:32.854 31.651 - 31.884: 99.1961% ( 3) 00:30:32.854 31.884 - 32.116: 99.2681% ( 6) 00:30:32.854 32.116 - 32.349: 99.2801% ( 1) 00:30:32.854 32.349 - 32.582: 99.2921% ( 1) 00:30:32.854 32.582 - 32.815: 99.3401% ( 4) 00:30:32.854 32.815 - 33.047: 99.3641% ( 2) 00:30:32.854 33.047 - 33.280: 99.3760% ( 1) 00:30:32.854 33.280 - 33.513: 99.4000% ( 2) 00:30:32.854 33.513 - 33.745: 99.4120% ( 1) 00:30:32.854 33.745 - 33.978: 99.4240% ( 1) 00:30:32.854 34.211 - 34.444: 99.4360% ( 1) 00:30:32.854 34.444 - 34.676: 99.4720% ( 3) 00:30:32.854 34.676 - 34.909: 99.4960% ( 2) 00:30:32.854 34.909 - 35.142: 99.5200% ( 2) 00:30:32.854 35.142 - 35.375: 99.5320% ( 1) 00:30:32.854 35.375 - 35.607: 99.5440% ( 1) 00:30:32.854 35.607 - 35.840: 99.5560% ( 1) 00:30:32.854 36.305 - 36.538: 99.5680% ( 1) 00:30:32.854 36.538 - 36.771: 99.5800% ( 1) 00:30:32.854 36.771 - 37.004: 99.5920% ( 1) 00:30:32.854 37.236 - 37.469: 99.6040% ( 1) 00:30:32.854 37.469 - 37.702: 99.6280% ( 2) 00:30:32.854 37.935 - 38.167: 99.6520% ( 2) 00:30:32.854 38.633 - 38.865: 99.7000% ( 4) 00:30:32.854 40.029 - 40.262: 99.7240% ( 2) 00:30:32.854 40.727 - 40.960: 99.7480% ( 2) 00:30:32.854 41.425 - 41.658: 99.7600% ( 1) 00:30:32.854 41.658 - 41.891: 99.7720% ( 1) 00:30:32.854 41.891 - 42.124: 99.7960% ( 2) 00:30:32.854 42.124 - 42.356: 99.8080% ( 1) 00:30:32.854 42.356 - 42.589: 99.8320% ( 2) 00:30:32.854 43.287 - 43.520: 99.8440% ( 1) 00:30:32.854 43.753 - 43.985: 99.8560% ( 1) 00:30:32.854 44.451 - 44.684: 99.8680% ( 1) 00:30:32.854 44.916 - 45.149: 99.8800% ( 1) 00:30:32.854 45.149 - 45.382: 99.8920% ( 1) 00:30:32.854 46.545 - 46.778: 99.9040% ( 1) 00:30:32.854 47.011 - 47.244: 99.9160% ( 1) 00:30:32.854 47.942 - 48.175: 99.9280% ( 1) 00:30:32.854 48.640 - 48.873: 99.9400% ( 1) 00:30:32.854 48.873 - 49.105: 99.9520% ( 1) 00:30:32.854 55.622 - 55.855: 99.9640% ( 1) 00:30:32.854 56.320 - 56.553: 99.9760% ( 1) 00:30:32.854 57.018 - 57.251: 99.9880% ( 1) 00:30:32.854 90.764 - 91.229: 100.0000% ( 1) 00:30:32.854 00:30:32.854 ************************************ 00:30:32.854 END TEST nvme_overhead 00:30:32.854 ************************************ 00:30:32.854 00:30:32.854 real 0m1.341s 00:30:32.854 user 0m1.117s 00:30:32.855 sys 0m0.137s 00:30:32.855 13:15:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.855 13:15:51 -- common/autotest_common.sh@10 -- # set +x 00:30:32.855 13:15:51 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:32.855 13:15:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:30:32.855 13:15:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:32.855 13:15:51 -- common/autotest_common.sh@10 -- # set +x 00:30:32.855 ************************************ 00:30:32.855 START TEST nvme_arbitration 00:30:32.855 ************************************ 00:30:32.855 13:15:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:37.037 Initializing NVMe 
Controllers 00:30:37.037 Attached to 0000:00:06.0 00:30:37.037 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:30:37.037 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:30:37.037 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:30:37.037 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:30:37.037 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:30:37.037 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:30:37.037 Initialization complete. Launching workers. 00:30:37.037 Starting thread on core 1 with urgent priority queue 00:30:37.037 Starting thread on core 2 with urgent priority queue 00:30:37.037 Starting thread on core 0 with urgent priority queue 00:30:37.037 Starting thread on core 3 with urgent priority queue 00:30:37.037 QEMU NVMe Ctrl (12340 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:30:37.037 QEMU NVMe Ctrl (12340 ) core 1: 1088.00 IO/s 91.91 secs/100000 ios 00:30:37.037 QEMU NVMe Ctrl (12340 ) core 2: 448.00 IO/s 223.21 secs/100000 ios 00:30:37.037 QEMU NVMe Ctrl (12340 ) core 3: 1045.33 IO/s 95.66 secs/100000 ios 00:30:37.037 ======================================================== 00:30:37.037 00:30:37.037 ************************************ 00:30:37.037 END TEST nvme_arbitration 00:30:37.037 ************************************ 00:30:37.037 00:30:37.037 real 0m3.517s 00:30:37.037 user 0m9.509s 00:30:37.037 sys 0m0.116s 00:30:37.037 13:15:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:37.037 13:15:55 -- common/autotest_common.sh@10 -- # set +x 00:30:37.037 13:15:55 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:30:37.037 13:15:55 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:30:37.037 13:15:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:37.037 13:15:55 -- common/autotest_common.sh@10 -- # set +x 00:30:37.037 ************************************ 00:30:37.037 START TEST nvme_single_aen 00:30:37.037 ************************************ 00:30:37.037 13:15:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:30:37.037 [2024-06-11 13:15:55.185178] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:37.037 [2024-06-11 13:15:55.185489] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.037 [2024-06-11 13:15:55.384644] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:37.037 Asynchronous Event Request test 00:30:37.037 Attached to 0000:00:06.0 00:30:37.037 Reset controller to setup AER completions for this process 00:30:37.037 Registering asynchronous event callbacks... 
00:30:37.037 Getting orig temperature thresholds of all controllers 00:30:37.037 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:37.037 Setting all controllers temperature threshold low to trigger AER 00:30:37.037 Waiting for all controllers temperature threshold to be set lower 00:30:37.037 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:37.037 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:30:37.037 Waiting for all controllers to trigger AER and reset threshold 00:30:37.037 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:37.037 Cleaning up... 00:30:37.037 ************************************ 00:30:37.037 END TEST nvme_single_aen 00:30:37.037 ************************************ 00:30:37.037 00:30:37.037 real 0m0.297s 00:30:37.037 user 0m0.130s 00:30:37.037 sys 0m0.098s 00:30:37.037 13:15:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:37.037 13:15:55 -- common/autotest_common.sh@10 -- # set +x 00:30:37.037 13:15:55 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:30:37.037 13:15:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:37.037 13:15:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:37.037 13:15:55 -- common/autotest_common.sh@10 -- # set +x 00:30:37.037 ************************************ 00:30:37.037 START TEST nvme_doorbell_aers 00:30:37.037 ************************************ 00:30:37.037 13:15:55 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:30:37.037 13:15:55 -- nvme/nvme.sh@70 -- # bdfs=() 00:30:37.037 13:15:55 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:30:37.037 13:15:55 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:30:37.037 13:15:55 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:30:37.037 13:15:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:37.037 13:15:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:37.037 13:15:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:37.037 13:15:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:37.037 13:15:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:37.037 13:15:55 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:37.037 13:15:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:30:37.037 13:15:55 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:37.037 13:15:55 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:30:37.037 [2024-06-11 13:15:55.854905] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143860) is not found. Dropping the request. 00:30:47.009 Executing: test_write_invalid_db 00:30:47.009 Waiting for AER completion... 00:30:47.009 Failure: test_write_invalid_db 00:30:47.009 00:30:47.009 Executing: test_invalid_db_write_overflow_sq 00:30:47.009 Waiting for AER completion... 00:30:47.009 Failure: test_invalid_db_write_overflow_sq 00:30:47.009 00:30:47.009 Executing: test_invalid_db_write_overflow_cq 00:30:47.009 Waiting for AER completion... 
00:30:47.009 Failure: test_invalid_db_write_overflow_cq 00:30:47.009 00:30:47.009 ************************************ 00:30:47.009 END TEST nvme_doorbell_aers 00:30:47.009 ************************************ 00:30:47.009 00:30:47.009 real 0m10.116s 00:30:47.009 user 0m8.287s 00:30:47.009 sys 0m1.751s 00:30:47.009 13:16:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.009 13:16:05 -- common/autotest_common.sh@10 -- # set +x 00:30:47.009 13:16:05 -- nvme/nvme.sh@97 -- # uname 00:30:47.009 13:16:05 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:30:47.009 13:16:05 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:30:47.009 13:16:05 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:30:47.009 13:16:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:47.009 13:16:05 -- common/autotest_common.sh@10 -- # set +x 00:30:47.009 ************************************ 00:30:47.009 START TEST nvme_multi_aen 00:30:47.009 ************************************ 00:30:47.009 13:16:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:30:47.009 [2024-06-11 13:16:05.698043] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:47.009 [2024-06-11 13:16:05.698536] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:47.267 [2024-06-11 13:16:05.893018] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:47.268 [2024-06-11 13:16:05.893233] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143860) is not found. Dropping the request. 00:30:47.268 [2024-06-11 13:16:05.893458] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143860) is not found. Dropping the request. 00:30:47.268 [2024-06-11 13:16:05.893541] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143860) is not found. Dropping the request. 00:30:47.268 [2024-06-11 13:16:05.901030] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:47.268 [2024-06-11 13:16:05.901732] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:47.268 Child process pid: 144069 00:30:47.526 [Child] Asynchronous Event Request test 00:30:47.527 [Child] Attached to 0000:00:06.0 00:30:47.527 [Child] Registering asynchronous event callbacks... 00:30:47.527 [Child] Getting orig temperature thresholds of all controllers 00:30:47.527 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:47.527 [Child] Waiting for all controllers to trigger AER and reset threshold 00:30:47.527 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:47.527 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:47.527 [Child] Cleaning up... 00:30:47.527 Asynchronous Event Request test 00:30:47.527 Attached to 0000:00:06.0 00:30:47.527 Reset controller to setup AER completions for this process 00:30:47.527 Registering asynchronous event callbacks... 
00:30:47.527 Getting orig temperature thresholds of all controllers 00:30:47.527 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:47.527 Setting all controllers temperature threshold low to trigger AER 00:30:47.527 Waiting for all controllers temperature threshold to be set lower 00:30:47.527 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:47.527 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:30:47.527 Waiting for all controllers to trigger AER and reset threshold 00:30:47.527 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:47.527 Cleaning up... 00:30:47.527 ************************************ 00:30:47.527 END TEST nvme_multi_aen 00:30:47.527 ************************************ 00:30:47.527 00:30:47.527 real 0m0.622s 00:30:47.527 user 0m0.210s 00:30:47.527 sys 0m0.242s 00:30:47.527 13:16:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.527 13:16:06 -- common/autotest_common.sh@10 -- # set +x 00:30:47.527 13:16:06 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:30:47.527 13:16:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:30:47.527 13:16:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:47.527 13:16:06 -- common/autotest_common.sh@10 -- # set +x 00:30:47.527 ************************************ 00:30:47.527 START TEST nvme_startup 00:30:47.527 ************************************ 00:30:47.527 13:16:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:30:47.785 Initializing NVMe Controllers 00:30:47.785 Attached to 0000:00:06.0 00:30:47.785 Initialization complete. 00:30:47.785 Time used:187099.094 (us). 00:30:47.785 ************************************ 00:30:47.785 END TEST nvme_startup 00:30:47.785 ************************************ 00:30:47.785 00:30:47.786 real 0m0.277s 00:30:47.786 user 0m0.087s 00:30:47.786 sys 0m0.118s 00:30:47.786 13:16:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.786 13:16:06 -- common/autotest_common.sh@10 -- # set +x 00:30:48.044 13:16:06 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:30:48.044 13:16:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:48.044 13:16:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:48.044 13:16:06 -- common/autotest_common.sh@10 -- # set +x 00:30:48.044 ************************************ 00:30:48.044 START TEST nvme_multi_secondary 00:30:48.044 ************************************ 00:30:48.044 13:16:06 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:30:48.044 13:16:06 -- nvme/nvme.sh@52 -- # pid0=144135 00:30:48.044 13:16:06 -- nvme/nvme.sh@54 -- # pid1=144136 00:30:48.044 13:16:06 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:30:48.044 13:16:06 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:30:48.044 13:16:06 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:30:51.326 Initializing NVMe Controllers 00:30:51.326 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:51.326 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:30:51.326 Initialization complete. Launching workers. 
00:30:51.326 ======================================================== 00:30:51.326 Latency(us) 00:30:51.326 Device Information : IOPS MiB/s Average min max 00:30:51.326 PCIE (0000:00:06.0) NSID 1 from core 2: 14134.00 55.21 1131.46 150.97 20815.26 00:30:51.326 ======================================================== 00:30:51.326 Total : 14134.00 55.21 1131.46 150.97 20815.26 00:30:51.326 00:30:51.326 13:16:10 -- nvme/nvme.sh@56 -- # wait 144135 00:30:51.326 Initializing NVMe Controllers 00:30:51.326 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:51.326 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:30:51.326 Initialization complete. Launching workers. 00:30:51.326 ======================================================== 00:30:51.326 Latency(us) 00:30:51.326 Device Information : IOPS MiB/s Average min max 00:30:51.326 PCIE (0000:00:06.0) NSID 1 from core 1: 34007.00 132.84 470.16 115.44 3485.14 00:30:51.326 ======================================================== 00:30:51.326 Total : 34007.00 132.84 470.16 115.44 3485.14 00:30:51.326 00:30:53.851 Initializing NVMe Controllers 00:30:53.851 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:53.851 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:53.851 Initialization complete. Launching workers. 00:30:53.851 ======================================================== 00:30:53.851 Latency(us) 00:30:53.851 Device Information : IOPS MiB/s Average min max 00:30:53.851 PCIE (0000:00:06.0) NSID 1 from core 0: 40879.20 159.68 391.08 120.90 1432.47 00:30:53.851 ======================================================== 00:30:53.851 Total : 40879.20 159.68 391.08 120.90 1432.47 00:30:53.851 00:30:53.851 13:16:12 -- nvme/nvme.sh@57 -- # wait 144136 00:30:53.851 13:16:12 -- nvme/nvme.sh@61 -- # pid0=144229 00:30:53.851 13:16:12 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:30:53.851 13:16:12 -- nvme/nvme.sh@63 -- # pid1=144230 00:30:53.851 13:16:12 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:30:53.851 13:16:12 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:30:57.130 Initializing NVMe Controllers 00:30:57.130 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:57.130 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:57.130 Initialization complete. Launching workers. 00:30:57.130 ======================================================== 00:30:57.130 Latency(us) 00:30:57.130 Device Information : IOPS MiB/s Average min max 00:30:57.130 PCIE (0000:00:06.0) NSID 1 from core 0: 31915.07 124.67 500.98 118.88 2366.44 00:30:57.130 ======================================================== 00:30:57.130 Total : 31915.07 124.67 500.98 118.88 2366.44 00:30:57.130 00:30:57.130 Initializing NVMe Controllers 00:30:57.130 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:57.130 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:30:57.130 Initialization complete. Launching workers. 
00:30:57.130 ======================================================== 00:30:57.130 Latency(us) 00:30:57.130 Device Information : IOPS MiB/s Average min max 00:30:57.130 PCIE (0000:00:06.0) NSID 1 from core 1: 32336.51 126.31 494.47 139.52 2740.06 00:30:57.130 ======================================================== 00:30:57.130 Total : 32336.51 126.31 494.47 139.52 2740.06 00:30:57.130 00:30:59.661 Initializing NVMe Controllers 00:30:59.661 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:59.661 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:30:59.661 Initialization complete. Launching workers. 00:30:59.662 ======================================================== 00:30:59.662 Latency(us) 00:30:59.662 Device Information : IOPS MiB/s Average min max 00:30:59.662 PCIE (0000:00:06.0) NSID 1 from core 2: 17346.96 67.76 921.90 143.88 28719.48 00:30:59.662 ======================================================== 00:30:59.662 Total : 17346.96 67.76 921.90 143.88 28719.48 00:30:59.662 00:30:59.662 ************************************ 00:30:59.662 END TEST nvme_multi_secondary 00:30:59.662 ************************************ 00:30:59.662 13:16:17 -- nvme/nvme.sh@65 -- # wait 144229 00:30:59.662 13:16:17 -- nvme/nvme.sh@66 -- # wait 144230 00:30:59.662 00:30:59.662 real 0m11.267s 00:30:59.662 user 0m18.727s 00:30:59.662 sys 0m0.833s 00:30:59.662 13:16:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:59.662 13:16:17 -- common/autotest_common.sh@10 -- # set +x 00:30:59.662 13:16:17 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:30:59.662 13:16:17 -- nvme/nvme.sh@102 -- # kill_stub 00:30:59.662 13:16:17 -- common/autotest_common.sh@1065 -- # [[ -e /proc/143392 ]] 00:30:59.662 13:16:17 -- common/autotest_common.sh@1066 -- # kill 143392 00:30:59.662 13:16:17 -- common/autotest_common.sh@1067 -- # wait 143392 00:30:59.920 [2024-06-11 13:16:18.623138] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144068) is not found. Dropping the request. 00:30:59.920 [2024-06-11 13:16:18.623455] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144068) is not found. Dropping the request. 00:30:59.921 [2024-06-11 13:16:18.623627] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144068) is not found. Dropping the request. 00:30:59.921 [2024-06-11 13:16:18.623797] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144068) is not found. Dropping the request. 00:31:00.180 13:16:18 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:31:00.180 13:16:18 -- common/autotest_common.sh@1073 -- # echo 2 00:31:00.180 13:16:18 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:00.180 13:16:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:00.180 13:16:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:00.180 13:16:18 -- common/autotest_common.sh@10 -- # set +x 00:31:00.180 ************************************ 00:31:00.180 START TEST bdev_nvme_reset_stuck_adm_cmd 00:31:00.180 ************************************ 00:31:00.180 13:16:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:00.180 * Looking for test storage... 
00:31:00.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:00.180 13:16:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:31:00.180 13:16:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:31:00.180 13:16:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:31:00.180 13:16:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:31:00.180 13:16:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:31:00.180 13:16:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:31:00.180 13:16:18 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:00.180 13:16:18 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:00.180 13:16:18 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:00.180 13:16:18 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:00.180 13:16:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:00.180 13:16:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:00.180 13:16:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:00.180 13:16:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:00.180 13:16:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:00.439 13:16:19 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:00.439 13:16:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:00.439 13:16:19 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:31:00.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.439 13:16:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:31:00.439 13:16:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:31:00.439 13:16:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=144395 00:31:00.439 13:16:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:31:00.439 13:16:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:00.439 13:16:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 144395 00:31:00.439 13:16:19 -- common/autotest_common.sh@819 -- # '[' -z 144395 ']' 00:31:00.439 13:16:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.439 13:16:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:00.439 13:16:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.439 13:16:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:00.439 13:16:19 -- common/autotest_common.sh@10 -- # set +x 00:31:00.439 [2024-06-11 13:16:19.120604] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:31:00.439 [2024-06-11 13:16:19.120983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144395 ] 00:31:00.698 [2024-06-11 13:16:19.332734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:00.957 [2024-06-11 13:16:19.591316] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:00.957 [2024-06-11 13:16:19.591929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.957 [2024-06-11 13:16:19.592044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.957 [2024-06-11 13:16:19.592163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.957 [2024-06-11 13:16:19.592165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.332 13:16:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:02.332 13:16:20 -- common/autotest_common.sh@852 -- # return 0 00:31:02.332 13:16:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:31:02.332 13:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.332 13:16:20 -- common/autotest_common.sh@10 -- # set +x 00:31:02.332 nvme0n1 00:31:02.332 13:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.332 13:16:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:31:02.332 13:16:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_pse73.txt 00:31:02.332 13:16:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:31:02.332 13:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.332 13:16:20 -- common/autotest_common.sh@10 -- # set +x 00:31:02.332 true 00:31:02.332 13:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.332 13:16:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:31:02.332 13:16:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1718111780 00:31:02.332 13:16:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=144441 00:31:02.332 13:16:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:31:02.332 13:16:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:02.332 13:16:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:04.233 13:16:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:04.233 13:16:22 -- common/autotest_common.sh@10 -- # set +x 00:31:04.233 [2024-06-11 13:16:22.899055] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:04.233 [2024-06-11 13:16:22.899671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.233 [2024-06-11 13:16:22.899908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:31:04.233 [2024-06-11 13:16:22.900056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.233 [2024-06-11 13:16:22.902241] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:04.233 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 144441 00:31:04.233 13:16:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 144441 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 144441 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.233 13:16:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:04.233 13:16:22 -- common/autotest_common.sh@10 -- # set +x 00:31:04.233 13:16:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_pse73.txt 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:04.233 13:16:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_pse73.txt 00:31:04.233 13:16:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 144395 00:31:04.233 13:16:23 -- common/autotest_common.sh@926 -- # '[' -z 144395 ']' 00:31:04.233 13:16:23 -- common/autotest_common.sh@930 -- # kill -0 144395 00:31:04.233 13:16:23 -- common/autotest_common.sh@931 -- # uname 00:31:04.233 
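The base64_decode_bits calls above are unpacking the completion entry that bdev_nvme_send_cmd reported back in its .cpl field. Below is a minimal stand-alone sketch of that decoding, not the helper itself: it assumes the standard NVMe completion layout (the final 16-bit status word carries the phase tag in bit 0, the Status Code in bits 1-8 and the Status Code Type in bits 9-11) and reuses the value captured in this run.

#!/usr/bin/env bash
# Sketch only: decode the base64 completion returned by bdev_nvme_send_cmd and
# extract the Status Code / Status Code Type, using the same bit positions the
# base64_decode_bits helper applied above.
cpl_b64="AAAAAAAAAAAAAAAAAAACAA=="   # value captured in this run

# One "0xNN" string per byte, the same base64 -d | hexdump pipeline as the trace.
mapfile -t bytes < <(base64 -d <(printf '%s' "$cpl_b64") | hexdump -ve '/1 "0x%02x\n"')

# The 16-byte completion ends with the little-endian status word.
status=$(( bytes[15] << 8 | bytes[14] ))

sc=$((  (status >> 1) & 0xff ))      # Status Code
sct=$(( (status >> 9) & 0x7  ))      # Status Code Type
printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"

With the 0x0002 status word recorded here this yields sc=0x1 and sct=0x0, which is exactly what the test compares against the error it injected earlier with --sct 0 --sc 1.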
13:16:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:04.233 13:16:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144395 00:31:04.233 killing process with pid 144395 00:31:04.233 13:16:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:04.233 13:16:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:04.233 13:16:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144395' 00:31:04.233 13:16:23 -- common/autotest_common.sh@945 -- # kill 144395 00:31:04.233 13:16:23 -- common/autotest_common.sh@950 -- # wait 144395 00:31:06.776 13:16:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:06.776 13:16:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:06.776 ************************************ 00:31:06.776 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:06.776 00:31:06.776 real 0m6.230s 00:31:06.776 user 0m22.101s 00:31:06.776 sys 0m0.656s 00:31:06.776 13:16:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.776 13:16:25 -- common/autotest_common.sh@10 -- # set +x 00:31:06.776 ************************************ 00:31:06.776 13:16:25 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:06.776 13:16:25 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:06.776 13:16:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:06.776 13:16:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:06.776 13:16:25 -- common/autotest_common.sh@10 -- # set +x 00:31:06.776 ************************************ 00:31:06.776 START TEST nvme_fio 00:31:06.776 ************************************ 00:31:06.776 13:16:25 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:31:06.776 13:16:25 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:06.776 13:16:25 -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:06.776 13:16:25 -- nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:31:06.776 13:16:25 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:06.776 13:16:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:06.776 13:16:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:06.776 13:16:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:06.776 13:16:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:06.776 13:16:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:06.776 13:16:25 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:06.776 13:16:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:06.776 13:16:25 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:06.776 13:16:25 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:06.776 13:16:25 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:06.776 13:16:25 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:06.776 13:16:25 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:06.776 13:16:25 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:07.052 13:16:25 -- nvme/nvme.sh@41 -- # bs=4096 00:31:07.052 13:16:25 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:07.052 
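The fio_plugin trace that follows expands the fio_nvme wrapper invoked just above. Condensed into a few lines, and using only the paths visible in this run, the pattern looks roughly like the sketch below; the point is that on an ASAN build the sanitizer runtime linked into the SPDK ioengine has to be preloaded ahead of fio, since fio itself is not instrumented.

#!/usr/bin/env bash
# Condensed sketch of the fio_plugin pattern traced below, using this run's paths.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

# Locate the ASAN runtime the ioengine was linked against (empty on non-ASAN builds).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# Preload the sanitizer runtime (if any) before the ioengine, then run fio with the
# controller's PCIe address as the "filename".
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
    /usr/src/fio/fio "$config" '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096

The PCIe address in the filename uses dots instead of colons so that fio's own ':' filename separator is not triggered.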
13:16:25 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:07.052 13:16:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:07.052 13:16:25 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:31:07.052 13:16:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:07.052 13:16:25 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:07.052 13:16:25 -- common/autotest_common.sh@1320 -- # shift 00:31:07.052 13:16:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:07.052 13:16:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.052 13:16:25 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:07.052 13:16:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:07.052 13:16:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:07.052 13:16:25 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:31:07.052 13:16:25 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:31:07.052 13:16:25 -- common/autotest_common.sh@1326 -- # break 00:31:07.052 13:16:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:07.052 13:16:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:07.052 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:07.052 fio-3.35 00:31:07.052 Starting 1 thread 00:31:10.335 00:31:10.335 test: (groupid=0, jobs=1): err= 0: pid=144591: Tue Jun 11 13:16:29 2024 00:31:10.335 read: IOPS=17.7k, BW=69.2MiB/s (72.6MB/s)(139MiB/2001msec) 00:31:10.335 slat (nsec): min=3938, max=81442, avg=5666.26, stdev=3087.79 00:31:10.335 clat (usec): min=264, max=7511, avg=3586.87, stdev=312.97 00:31:10.335 lat (usec): min=268, max=7583, avg=3592.54, stdev=313.21 00:31:10.335 clat percentiles (usec): 00:31:10.335 | 1.00th=[ 2999], 5.00th=[ 3163], 10.00th=[ 3228], 20.00th=[ 3326], 00:31:10.335 | 30.00th=[ 3425], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3654], 00:31:10.335 | 70.00th=[ 3720], 80.00th=[ 3851], 90.00th=[ 3982], 95.00th=[ 4113], 00:31:10.335 | 99.00th=[ 4359], 99.50th=[ 4424], 99.90th=[ 4752], 99.95th=[ 6063], 00:31:10.335 | 99.99th=[ 6718] 00:31:10.335 bw ( KiB/s): min=69128, max=71992, per=99.16%, avg=70317.33, stdev=1492.41, samples=3 00:31:10.335 iops : min=17282, max=17998, avg=17579.33, stdev=373.10, samples=3 00:31:10.335 write: IOPS=17.7k, BW=69.2MiB/s (72.6MB/s)(139MiB/2001msec); 0 zone resets 00:31:10.335 slat (nsec): min=4015, max=67453, avg=5865.46, stdev=3228.09 00:31:10.335 clat (usec): min=245, max=6705, avg=3606.79, stdev=311.34 00:31:10.335 lat (usec): min=250, max=6733, avg=3612.66, stdev=311.57 00:31:10.335 clat percentiles (usec): 00:31:10.335 | 1.00th=[ 3032], 5.00th=[ 3163], 10.00th=[ 3261], 20.00th=[ 3359], 00:31:10.335 | 30.00th=[ 3425], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3654], 00:31:10.335 | 70.00th=[ 3752], 80.00th=[ 3851], 90.00th=[ 4015], 95.00th=[ 4113], 00:31:10.335 | 99.00th=[ 4359], 99.50th=[ 4424], 99.90th=[ 5014], 99.95th=[ 6128], 
00:31:10.335 | 99.99th=[ 6587] 00:31:10.335 bw ( KiB/s): min=69304, max=71912, per=99.19%, avg=70317.33, stdev=1397.81, samples=3 00:31:10.335 iops : min=17326, max=17978, avg=17579.33, stdev=349.45, samples=3 00:31:10.335 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:10.335 lat (msec) : 2=0.05%, 4=90.30%, 10=9.61% 00:31:10.335 cpu : usr=99.95%, sys=0.00%, ctx=2, majf=0, minf=35 00:31:10.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:10.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:10.335 issued rwts: total=35473,35465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:10.335 00:31:10.335 Run status group 0 (all jobs): 00:31:10.335 READ: bw=69.2MiB/s (72.6MB/s), 69.2MiB/s-69.2MiB/s (72.6MB/s-72.6MB/s), io=139MiB (145MB), run=2001-2001msec 00:31:10.335 WRITE: bw=69.2MiB/s (72.6MB/s), 69.2MiB/s-69.2MiB/s (72.6MB/s-72.6MB/s), io=139MiB (145MB), run=2001-2001msec 00:31:10.901 ----------------------------------------------------- 00:31:10.901 Suppressions used: 00:31:10.901 count bytes template 00:31:10.901 1 32 /usr/src/fio/parse.c 00:31:10.901 ----------------------------------------------------- 00:31:10.901 00:31:10.901 ************************************ 00:31:10.901 END TEST nvme_fio 00:31:10.901 ************************************ 00:31:10.901 13:16:29 -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:10.901 13:16:29 -- nvme/nvme.sh@46 -- # true 00:31:10.901 00:31:10.901 real 0m4.322s 00:31:10.901 user 0m3.615s 00:31:10.901 sys 0m0.368s 00:31:10.901 13:16:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:10.901 13:16:29 -- common/autotest_common.sh@10 -- # set +x 00:31:10.901 ************************************ 00:31:10.901 END TEST nvme 00:31:10.901 ************************************ 00:31:10.901 00:31:10.901 real 0m48.764s 00:31:10.901 user 2m8.992s 00:31:10.901 sys 0m8.430s 00:31:10.901 13:16:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:10.901 13:16:29 -- common/autotest_common.sh@10 -- # set +x 00:31:10.901 13:16:29 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:31:10.901 13:16:29 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:10.901 13:16:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:10.901 13:16:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:10.901 13:16:29 -- common/autotest_common.sh@10 -- # set +x 00:31:10.901 ************************************ 00:31:10.901 START TEST nvme_scc 00:31:10.901 ************************************ 00:31:10.901 13:16:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:10.901 * Looking for test storage... 
00:31:10.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:10.901 13:16:29 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:10.901 13:16:29 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:10.901 13:16:29 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:31:10.901 13:16:29 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:10.901 13:16:29 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:10.901 13:16:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.901 13:16:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.901 13:16:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.901 13:16:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:10.902 13:16:29 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:10.902 13:16:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:10.902 13:16:29 -- paths/export.sh@5 -- # export PATH 00:31:10.902 13:16:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:10.902 13:16:29 -- nvme/functions.sh@10 -- # ctrls=() 00:31:10.902 13:16:29 -- nvme/functions.sh@10 -- # declare -A ctrls 00:31:10.902 13:16:29 -- nvme/functions.sh@11 -- # nvmes=() 00:31:10.902 13:16:29 -- nvme/functions.sh@11 -- # declare -A nvmes 00:31:10.902 13:16:29 -- nvme/functions.sh@12 -- # bdfs=() 00:31:10.902 13:16:29 -- nvme/functions.sh@12 -- # declare -A bdfs 00:31:10.902 13:16:29 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:31:10.902 13:16:29 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:31:10.902 13:16:29 -- nvme/functions.sh@14 -- # nvme_name= 00:31:10.902 13:16:29 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:10.902 13:16:29 -- nvme/nvme_scc.sh@12 -- # uname 00:31:10.902 13:16:29 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:31:10.902 13:16:29 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
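Everything from here to the end of the controller scan is common/nvme/functions.sh reading nvme id-ctrl (and later nvme id-ns) output field by field into per-controller associative arrays such as nvme0 and nvme0n1. A simplified sketch of that pattern follows, assuming only the binary path and field names visible in the trace; it is not the helper verbatim.

#!/usr/bin/env bash
# Simplified sketch of the nvme_get pattern expanded in the trace below: split each
# "name : value" line of the identify output on ':' and keep it in an associative
# array named after the controller.
declare -A nvme0

while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                  # field name, whitespace stripped
    val=${val#"${val%%[![:space:]]*}"}        # drop leading whitespace from the value
    [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} oacs=${nvme0[oacs]}"

The real helper also preserves multi-word values (sn, mn, fr, the power-state lines) and registers the namespace arrays, which is why the trace that follows runs so long.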
00:31:10.902 13:16:29 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:11.159 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:11.159 Waiting for block devices as requested 00:31:11.420 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:11.420 13:16:30 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:31:11.420 13:16:30 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:31:11.420 13:16:30 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:11.420 13:16:30 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:31:11.420 13:16:30 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:31:11.420 13:16:30 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:31:11.420 13:16:30 -- scripts/common.sh@15 -- # local i 00:31:11.420 13:16:30 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:31:11.420 13:16:30 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:11.421 13:16:30 -- scripts/common.sh@24 -- # return 0 00:31:11.421 13:16:30 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:31:11.421 13:16:30 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:31:11.421 13:16:30 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@18 -- # shift 00:31:11.421 13:16:30 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 
00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.421 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.421 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:31:11.421 13:16:30 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:11.422 13:16:30 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- 
# read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:31:11.422 13:16:30 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.422 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.422 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:31:11.423 
13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:31:11.423 
13:16:30 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 
13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.423 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.423 13:16:30 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:31:11.423 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:31:11.424 13:16:30 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:11.424 13:16:30 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:31:11.424 13:16:30 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:31:11.424 13:16:30 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@18 -- # shift 00:31:11.424 13:16:30 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 
00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.424 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.424 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.424 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 
13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:31:11.425 13:16:30 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # IFS=: 00:31:11.425 13:16:30 -- nvme/functions.sh@21 -- # read -r reg val 00:31:11.425 13:16:30 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:31:11.425 13:16:30 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:31:11.425 13:16:30 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:31:11.425 13:16:30 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:31:11.425 13:16:30 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:31:11.425 13:16:30 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:31:11.425 13:16:30 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:31:11.425 13:16:30 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:31:11.425 13:16:30 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:31:11.425 13:16:30 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:31:11.425 13:16:30 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:31:11.425 13:16:30 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:31:11.425 13:16:30 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:31:11.425 13:16:30 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:31:11.425 13:16:30 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:31:11.425 13:16:30 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:31:11.425 13:16:30 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:31:11.425 13:16:30 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:31:11.425 13:16:30 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:31:11.425 13:16:30 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:31:11.425 13:16:30 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:31:11.425 13:16:30 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:31:11.425 13:16:30 -- nvme/functions.sh@76 -- # echo 0x15d 00:31:11.425 13:16:30 -- nvme/functions.sh@184 -- # oncs=0x15d 00:31:11.425 13:16:30 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:31:11.425 13:16:30 -- nvme/functions.sh@197 -- # echo nvme0 00:31:11.425 13:16:30 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:31:11.425 13:16:30 -- nvme/functions.sh@206 -- # echo nvme0 00:31:11.425 13:16:30 -- nvme/functions.sh@207 -- # return 0 00:31:11.425 13:16:30 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:31:11.425 13:16:30 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:31:11.425 13:16:30 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:11.684 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:11.942 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:12.876 13:16:31 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:12.876 13:16:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:12.876 13:16:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:12.876 13:16:31 -- common/autotest_common.sh@10 -- # set +x 00:31:12.876 ************************************ 00:31:12.876 START TEST nvme_simple_copy 00:31:12.876 ************************************ 00:31:12.876 13:16:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:13.134 Initializing NVMe Controllers 00:31:13.134 Attaching to 0000:00:06.0 00:31:13.134 Controller supports SCC. Attached to 0000:00:06.0 00:31:13.134 Namespace ID: 1 size: 5GB 00:31:13.134 Initialization complete. 00:31:13.134 00:31:13.134 Controller QEMU NVMe Ctrl (12340 ) 00:31:13.134 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:31:13.134 Namespace Block Size:4096 00:31:13.134 Writing LBAs 0 to 63 with Random Data 00:31:13.134 Copied LBAs from 0 - 63 to the Destination LBA 256 00:31:13.134 LBAs matching Written Data: 64 00:31:13.393 ************************************ 00:31:13.393 END TEST nvme_simple_copy 00:31:13.393 ************************************ 00:31:13.393 00:31:13.393 real 0m0.296s 00:31:13.393 user 0m0.115s 00:31:13.393 sys 0m0.083s 00:31:13.393 13:16:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.393 13:16:31 -- common/autotest_common.sh@10 -- # set +x 00:31:13.393 ************************************ 00:31:13.393 END TEST nvme_scc 00:31:13.393 ************************************ 00:31:13.393 00:31:13.393 real 0m2.440s 00:31:13.393 user 0m0.708s 00:31:13.393 sys 0m1.590s 00:31:13.393 13:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.393 13:16:32 -- common/autotest_common.sh@10 -- # set +x 00:31:13.393 13:16:32 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:31:13.393 13:16:32 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:31:13.393 13:16:32 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:31:13.393 13:16:32 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:31:13.393 13:16:32 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:31:13.393 13:16:32 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:13.393 13:16:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:13.393 13:16:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:13.393 13:16:32 -- common/autotest_common.sh@10 -- # set +x 00:31:13.393 ************************************ 00:31:13.393 START TEST nvme_rpc 00:31:13.393 ************************************ 00:31:13.393 13:16:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:13.393 * Looking for test storage... 
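As a cross-check, the "Namespace ID: 1 size: 5GB" line printed by simple_copy is consistent with the identify data captured above: nsze=0x140000 blocks, and the in-use LBA format (flbas=0x4, lbaf4 with lbads:12) means 4096-byte blocks:

    echo $(( 0x140000 * 4096 / 1024**3 ))GiB    # -> 5GiB, matching the "size: 5GB" line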
00:31:13.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:13.393 13:16:32 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:13.393 13:16:32 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:31:13.393 13:16:32 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:13.393 13:16:32 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:13.393 13:16:32 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:13.393 13:16:32 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:13.393 13:16:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:13.393 13:16:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:13.393 13:16:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:13.393 13:16:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:13.393 13:16:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:13.393 13:16:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:13.393 13:16:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:13.393 13:16:32 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:31:13.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.393 13:16:32 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:31:13.393 13:16:32 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=145088 00:31:13.393 13:16:32 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:13.393 13:16:32 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:31:13.393 13:16:32 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 145088 00:31:13.393 13:16:32 -- common/autotest_common.sh@819 -- # '[' -z 145088 ']' 00:31:13.393 13:16:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.393 13:16:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:13.393 13:16:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.393 13:16:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:13.393 13:16:32 -- common/autotest_common.sh@10 -- # set +x 00:31:13.650 [2024-06-11 13:16:32.286921] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
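Before the nvme_rpc test can attach a controller it has to work out which PCI address to use; the autotest_common.sh@1498-@1512 entries above show get_first_nvme_bdf doing that by asking gen_nvme.sh for the detected controllers and taking the first address. Condensed from the trace (helper names as they appear there, $rootdir being the spdk checkout; minor details assumed):

    get_nvme_bdfs() {
        local bdfs=()
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1       # no NVMe controllers found
        printf '%s\n' "${bdfs[@]}"
    }

    get_first_nvme_bdf() {
        local bdfs=($(get_nvme_bdfs))
        echo "${bdfs[0]}"                      # 0000:00:06.0 on this QEMU guest
    }

That address is then handed to rpc.py bdev_nvme_attach_controller a few entries further down.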
00:31:13.650 [2024-06-11 13:16:32.287362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145088 ] 00:31:13.650 [2024-06-11 13:16:32.465886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:13.908 [2024-06-11 13:16:32.730036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:13.908 [2024-06-11 13:16:32.730654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.908 [2024-06-11 13:16:32.730666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.280 13:16:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:15.280 13:16:33 -- common/autotest_common.sh@852 -- # return 0 00:31:15.280 13:16:33 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:31:15.538 Nvme0n1 00:31:15.538 13:16:34 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:31:15.538 13:16:34 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:31:15.797 request: 00:31:15.797 { 00:31:15.797 "filename": "non_existing_file", 00:31:15.797 "bdev_name": "Nvme0n1", 00:31:15.797 "method": "bdev_nvme_apply_firmware", 00:31:15.797 "req_id": 1 00:31:15.797 } 00:31:15.797 Got JSON-RPC error response 00:31:15.797 response: 00:31:15.797 { 00:31:15.797 "code": -32603, 00:31:15.797 "message": "open file failed." 00:31:15.797 } 00:31:15.797 13:16:34 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:31:15.797 13:16:34 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:31:15.797 13:16:34 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:16.366 13:16:34 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:31:16.366 13:16:34 -- nvme/nvme_rpc.sh@40 -- # killprocess 145088 00:31:16.366 13:16:34 -- common/autotest_common.sh@926 -- # '[' -z 145088 ']' 00:31:16.366 13:16:34 -- common/autotest_common.sh@930 -- # kill -0 145088 00:31:16.366 13:16:34 -- common/autotest_common.sh@931 -- # uname 00:31:16.366 13:16:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:16.366 13:16:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145088 00:31:16.366 killing process with pid 145088 00:31:16.366 13:16:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:16.366 13:16:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:16.366 13:16:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145088' 00:31:16.366 13:16:34 -- common/autotest_common.sh@945 -- # kill 145088 00:31:16.366 13:16:34 -- common/autotest_common.sh@950 -- # wait 145088 00:31:18.268 ************************************ 00:31:18.268 END TEST nvme_rpc 00:31:18.268 ************************************ 00:31:18.268 00:31:18.268 real 0m4.945s 00:31:18.268 user 0m9.740s 00:31:18.268 sys 0m0.659s 00:31:18.268 13:16:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:18.268 13:16:37 -- common/autotest_common.sh@10 -- # set +x 00:31:18.268 13:16:37 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:18.268 13:16:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:18.268 13:16:37 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:31:18.268 13:16:37 -- common/autotest_common.sh@10 -- # set +x 00:31:18.268 ************************************ 00:31:18.268 START TEST nvme_rpc_timeouts 00:31:18.268 ************************************ 00:31:18.268 13:16:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:18.526 * Looking for test storage... 00:31:18.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:18.526 13:16:37 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:18.526 13:16:37 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_145181 00:31:18.526 13:16:37 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_145181 00:31:18.526 13:16:37 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=145211 00:31:18.526 13:16:37 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:31:18.526 13:16:37 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:18.526 13:16:37 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 145211 00:31:18.526 13:16:37 -- common/autotest_common.sh@819 -- # '[' -z 145211 ']' 00:31:18.526 13:16:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.527 13:16:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:18.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.527 13:16:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.527 13:16:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:18.527 13:16:37 -- common/autotest_common.sh@10 -- # set +x 00:31:18.527 [2024-06-11 13:16:37.223305] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:18.527 [2024-06-11 13:16:37.223484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145211 ] 00:31:18.785 [2024-06-11 13:16:37.399868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:19.044 [2024-06-11 13:16:37.656599] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:19.044 [2024-06-11 13:16:37.657036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.044 [2024-06-11 13:16:37.657050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.419 13:16:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:20.419 13:16:38 -- common/autotest_common.sh@852 -- # return 0 00:31:20.419 Checking default timeout settings: 00:31:20.419 13:16:38 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:31:20.420 13:16:38 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:20.420 Making settings changes with rpc: 00:31:20.420 13:16:39 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:31:20.420 13:16:39 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:31:20.678 Check default vs. 
modified settings: 00:31:20.678 13:16:39 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:31:20.678 13:16:39 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_145181 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_145181 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:31:20.936 Setting action_on_timeout is changed as expected. 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_145181 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_145181 00:31:20.936 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:20.937 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:31:20.937 Setting timeout_us is changed as expected. 00:31:20.937 13:16:39 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:31:20.937 13:16:39 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:31:20.937 13:16:39 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:20.937 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_145181 00:31:20.937 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:20.937 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:20.937 13:16:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:20.937 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_145181 00:31:20.937 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:20.937 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:21.195 13:16:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:31:21.195 Setting timeout_admin_us is changed as expected. 00:31:21.195 13:16:39 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:31:21.195 13:16:39 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
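Stripped of the per-field trace, the default-vs-modified check above is a small grep/awk/sed loop over the two save_config dumps: for each watched setting, the value recorded before bdev_nvme_set_options must differ from the one recorded after it (none -> abort, 0 -> 12000000, 0 -> 24000000). A condensed version of what nvme_rpc_timeouts.sh@38-@47 is doing, with file names and pid taken from this run (the failure branch is an assumption; only the success path appears in the log):

    settings_to_check='action_on_timeout timeout_us timeout_admin_us'
    for setting in $settings_to_check; do
        before=$(grep "$setting" /tmp/settings_default_145181  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting"  /tmp/settings_modified_145181 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before == "$after" ]] && { echo "Setting $setting unchanged, expected a change"; exit 1; }
        echo "Setting $setting is changed as expected."
    done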
00:31:21.195 13:16:39 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:31:21.195 13:16:39 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_145181 /tmp/settings_modified_145181 00:31:21.195 13:16:39 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 145211 00:31:21.195 13:16:39 -- common/autotest_common.sh@926 -- # '[' -z 145211 ']' 00:31:21.195 13:16:39 -- common/autotest_common.sh@930 -- # kill -0 145211 00:31:21.195 13:16:39 -- common/autotest_common.sh@931 -- # uname 00:31:21.195 13:16:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:21.195 13:16:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145211 00:31:21.195 13:16:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:21.195 13:16:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:21.195 killing process with pid 145211 00:31:21.195 13:16:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145211' 00:31:21.195 13:16:39 -- common/autotest_common.sh@945 -- # kill 145211 00:31:21.195 13:16:39 -- common/autotest_common.sh@950 -- # wait 145211 00:31:23.725 RPC TIMEOUT SETTING TEST PASSED. 00:31:23.725 13:16:41 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:31:23.725 00:31:23.725 real 0m4.915s 00:31:23.725 user 0m9.489s 00:31:23.725 sys 0m0.667s 00:31:23.725 ************************************ 00:31:23.725 END TEST nvme_rpc_timeouts 00:31:23.725 ************************************ 00:31:23.725 13:16:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:23.725 13:16:41 -- common/autotest_common.sh@10 -- # set +x 00:31:23.725 13:16:42 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:31:23.725 13:16:42 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:31:23.725 13:16:42 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:31:23.725 13:16:42 -- spdk/autotest.sh@268 -- # timing_exit lib 00:31:23.725 13:16:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:23.726 13:16:42 -- common/autotest_common.sh@10 -- # set +x 00:31:23.726 13:16:42 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:23.726 13:16:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:23.726 13:16:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:23.726 13:16:42 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:23.726 13:16:42 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:31:23.726 13:16:42 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:23.726 13:16:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:23.726 13:16:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:31:23.726 13:16:42 -- common/autotest_common.sh@10 -- # set +x 00:31:23.726 ************************************ 00:31:23.726 START TEST blockdev_raid5f 00:31:23.726 ************************************ 00:31:23.726 13:16:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:23.726 * Looking for test storage... 00:31:23.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:23.726 13:16:42 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:23.726 13:16:42 -- bdev/nbd_common.sh@6 -- # set -e 00:31:23.726 13:16:42 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:23.726 13:16:42 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:23.726 13:16:42 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:23.726 13:16:42 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:23.726 13:16:42 -- bdev/blockdev.sh@18 -- # : 00:31:23.726 13:16:42 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:31:23.726 13:16:42 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:31:23.726 13:16:42 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:31:23.726 13:16:42 -- bdev/blockdev.sh@672 -- # uname -s 00:31:23.726 13:16:42 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:31:23.726 13:16:42 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:31:23.726 13:16:42 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:31:23.726 13:16:42 -- bdev/blockdev.sh@681 -- # crypto_device= 00:31:23.726 13:16:42 -- bdev/blockdev.sh@682 -- # dek= 00:31:23.726 13:16:42 -- bdev/blockdev.sh@683 -- # env_ctx= 00:31:23.726 13:16:42 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:31:23.726 13:16:42 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:31:23.726 13:16:42 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:31:23.726 13:16:42 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:31:23.726 13:16:42 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:31:23.726 13:16:42 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=145378 00:31:23.726 13:16:42 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:23.726 13:16:42 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:23.726 13:16:42 -- bdev/blockdev.sh@47 -- # waitforlisten 145378 00:31:23.726 13:16:42 -- common/autotest_common.sh@819 -- # '[' -z 145378 ']' 00:31:23.726 13:16:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.726 13:16:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:23.726 13:16:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.726 13:16:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:23.726 13:16:42 -- common/autotest_common.sh@10 -- # set +x 00:31:23.726 [2024-06-11 13:16:42.222569] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:31:23.726 [2024-06-11 13:16:42.222729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145378 ] 00:31:23.726 [2024-06-11 13:16:42.385457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.985 [2024-06-11 13:16:42.611892] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:23.985 [2024-06-11 13:16:42.612200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.404 13:16:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:25.404 13:16:43 -- common/autotest_common.sh@852 -- # return 0 00:31:25.404 13:16:43 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:31:25.404 13:16:43 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:31:25.404 13:16:43 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:31:25.404 13:16:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.404 13:16:43 -- common/autotest_common.sh@10 -- # set +x 00:31:25.404 Malloc0 00:31:25.404 Malloc1 00:31:25.404 Malloc2 00:31:25.404 13:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.404 13:16:44 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:31:25.404 13:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.404 13:16:44 -- common/autotest_common.sh@10 -- # set +x 00:31:25.404 13:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.404 13:16:44 -- bdev/blockdev.sh@738 -- # cat 00:31:25.404 13:16:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:31:25.404 13:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.404 13:16:44 -- common/autotest_common.sh@10 -- # set +x 00:31:25.404 13:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.404 13:16:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:31:25.404 13:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.404 13:16:44 -- common/autotest_common.sh@10 -- # set +x 00:31:25.404 13:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.404 13:16:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:25.404 13:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.404 13:16:44 -- common/autotest_common.sh@10 -- # set +x 00:31:25.404 13:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.404 13:16:44 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:31:25.404 13:16:44 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:31:25.404 13:16:44 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:31:25.404 13:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.404 13:16:44 -- common/autotest_common.sh@10 -- # set +x 00:31:25.404 13:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.404 13:16:44 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:31:25.404 13:16:44 -- bdev/blockdev.sh@747 -- # jq -r .name 00:31:25.404 13:16:44 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "cb03fd7a-2ff9-4e0a-b9a3-13f76f8662f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cb03fd7a-2ff9-4e0a-b9a3-13f76f8662f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "cb03fd7a-2ff9-4e0a-b9a3-13f76f8662f3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "a70c4d8b-e411-44c0-b488-788228d9f149",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "753b2811-4a84-47c4-9d16-11ac6032f564",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "24909e57-0e8f-47e1-87ff-2c5214959ccb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:25.663 13:16:44 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:31:25.663 13:16:44 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:31:25.663 13:16:44 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:31:25.663 13:16:44 -- bdev/blockdev.sh@752 -- # killprocess 145378 00:31:25.663 13:16:44 -- common/autotest_common.sh@926 -- # '[' -z 145378 ']' 00:31:25.663 13:16:44 -- common/autotest_common.sh@930 -- # kill -0 145378 00:31:25.663 13:16:44 -- common/autotest_common.sh@931 -- # uname 00:31:25.663 13:16:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:25.663 13:16:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145378 00:31:25.663 13:16:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:25.663 killing process with pid 145378 00:31:25.663 13:16:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:25.663 13:16:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145378' 00:31:25.663 13:16:44 -- common/autotest_common.sh@945 -- # kill 145378 00:31:25.663 13:16:44 -- common/autotest_common.sh@950 -- # wait 145378 00:31:28.195 13:16:46 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:28.195 13:16:46 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:28.195 13:16:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:31:28.195 13:16:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:28.195 13:16:46 -- common/autotest_common.sh@10 -- # set +x 00:31:28.195 ************************************ 00:31:28.195 START TEST bdev_hello_world 00:31:28.195 ************************************ 00:31:28.195 13:16:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:28.195 [2024-06-11 13:16:46.754341] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:31:28.195 [2024-06-11 13:16:46.754555] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145478 ] 00:31:28.195 [2024-06-11 13:16:46.924259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.454 [2024-06-11 13:16:47.145012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.022 [2024-06-11 13:16:47.662987] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:29.022 [2024-06-11 13:16:47.663085] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:31:29.022 [2024-06-11 13:16:47.663117] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:29.022 [2024-06-11 13:16:47.663667] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:29.022 [2024-06-11 13:16:47.663829] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:29.022 [2024-06-11 13:16:47.663869] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:29.022 [2024-06-11 13:16:47.663943] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:31:29.022 00:31:29.022 [2024-06-11 13:16:47.663980] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:30.397 00:31:30.397 real 0m2.266s 00:31:30.397 user 0m1.831s 00:31:30.397 sys 0m0.316s 00:31:30.397 ************************************ 00:31:30.397 END TEST bdev_hello_world 00:31:30.397 ************************************ 00:31:30.397 13:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.397 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:31:30.397 13:16:48 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:31:30.397 13:16:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:30.397 13:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:30.397 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:31:30.397 ************************************ 00:31:30.397 START TEST bdev_bounds 00:31:30.397 ************************************ 00:31:30.397 13:16:49 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:31:30.397 13:16:49 -- bdev/blockdev.sh@288 -- # bdevio_pid=145522 00:31:30.397 Process bdevio pid: 145522 00:31:30.398 13:16:49 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:30.398 13:16:49 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 145522' 00:31:30.398 13:16:49 -- bdev/blockdev.sh@291 -- # waitforlisten 145522 00:31:30.398 13:16:49 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:30.398 13:16:49 -- common/autotest_common.sh@819 -- # '[' -z 145522 ']' 00:31:30.398 13:16:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.398 13:16:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:30.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.398 13:16:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
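The bdev_hello_world numbers above come from the stock example binary run against the raid5f bdev; outside the harness the same thing can be reproduced directly (paths as in this workspace, assuming the bdev.json generated for the test is still in place):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f

It writes "Hello World!" through the bdev layer and reads it back, which is why the pass signal is simply the NOTICE line ending in "Read string from bdev : Hello World!".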
00:31:30.398 13:16:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:30.398 13:16:49 -- common/autotest_common.sh@10 -- # set +x 00:31:30.398 [2024-06-11 13:16:49.062898] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:30.398 [2024-06-11 13:16:49.063063] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145522 ] 00:31:30.398 [2024-06-11 13:16:49.230265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:30.656 [2024-06-11 13:16:49.420110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.656 [2024-06-11 13:16:49.420274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:30.656 [2024-06-11 13:16:49.420283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.224 13:16:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:31.224 13:16:50 -- common/autotest_common.sh@852 -- # return 0 00:31:31.224 13:16:50 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:31.483 I/O targets: 00:31:31.483 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:31:31.483 00:31:31.483 00:31:31.483 CUnit - A unit testing framework for C - Version 2.1-3 00:31:31.483 http://cunit.sourceforge.net/ 00:31:31.483 00:31:31.483 00:31:31.483 Suite: bdevio tests on: raid5f 00:31:31.483 Test: blockdev write read block ...passed 00:31:31.483 Test: blockdev write zeroes read block ...passed 00:31:31.483 Test: blockdev write zeroes read no split ...passed 00:31:31.483 Test: blockdev write zeroes read split ...passed 00:31:31.742 Test: blockdev write zeroes read split partial ...passed 00:31:31.742 Test: blockdev reset ...passed 00:31:31.742 Test: blockdev write read 8 blocks ...passed 00:31:31.742 Test: blockdev write read size > 128k ...passed 00:31:31.742 Test: blockdev write read invalid size ...passed 00:31:31.742 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:31.742 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:31.742 Test: blockdev write read max offset ...passed 00:31:31.742 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:31.742 Test: blockdev writev readv 8 blocks ...passed 00:31:31.742 Test: blockdev writev readv 30 x 1block ...passed 00:31:31.742 Test: blockdev writev readv block ...passed 00:31:31.742 Test: blockdev writev readv size > 128k ...passed 00:31:31.742 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:31.742 Test: blockdev comparev and writev ...passed 00:31:31.742 Test: blockdev nvme passthru rw ...passed 00:31:31.742 Test: blockdev nvme passthru vendor specific ...passed 00:31:31.742 Test: blockdev nvme admin passthru ...passed 00:31:31.742 Test: blockdev copy ...passed 00:31:31.742 00:31:31.742 Run Summary: Type Total Ran Passed Failed Inactive 00:31:31.742 suites 1 1 n/a 0 0 00:31:31.742 tests 23 23 23 0 0 00:31:31.742 asserts 130 130 130 0 n/a 00:31:31.742 00:31:31.742 Elapsed time = 0.587 seconds 00:31:31.742 0 00:31:31.742 13:16:50 -- bdev/blockdev.sh@293 -- # killprocess 145522 00:31:31.742 13:16:50 -- common/autotest_common.sh@926 -- # '[' -z 145522 ']' 00:31:31.742 13:16:50 -- common/autotest_common.sh@930 -- # kill -0 145522 00:31:31.742 13:16:50 -- common/autotest_common.sh@931 -- # uname 00:31:31.742 13:16:50 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:31.742 13:16:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145522 00:31:31.742 13:16:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:31.742 13:16:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:31.742 killing process with pid 145522 00:31:31.742 13:16:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145522' 00:31:31.742 13:16:50 -- common/autotest_common.sh@945 -- # kill 145522 00:31:31.742 13:16:50 -- common/autotest_common.sh@950 -- # wait 145522 00:31:33.646 13:16:52 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:31:33.646 00:31:33.646 real 0m3.144s 00:31:33.646 user 0m7.498s 00:31:33.646 sys 0m0.469s 00:31:33.646 13:16:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:33.646 13:16:52 -- common/autotest_common.sh@10 -- # set +x 00:31:33.646 ************************************ 00:31:33.646 END TEST bdev_bounds 00:31:33.646 ************************************ 00:31:33.646 13:16:52 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:33.646 13:16:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:31:33.646 13:16:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:33.646 13:16:52 -- common/autotest_common.sh@10 -- # set +x 00:31:33.646 ************************************ 00:31:33.646 START TEST bdev_nbd 00:31:33.646 ************************************ 00:31:33.646 13:16:52 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:33.646 13:16:52 -- bdev/blockdev.sh@298 -- # uname -s 00:31:33.646 13:16:52 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:31:33.646 13:16:52 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:33.646 13:16:52 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:33.646 13:16:52 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:31:33.646 13:16:52 -- bdev/blockdev.sh@302 -- # local bdev_all 00:31:33.646 13:16:52 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:31:33.646 13:16:52 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:31:33.646 13:16:52 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:31:33.646 13:16:52 -- bdev/blockdev.sh@309 -- # local nbd_all 00:31:33.646 13:16:52 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:31:33.646 13:16:52 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:31:33.646 13:16:52 -- bdev/blockdev.sh@312 -- # local nbd_list 00:31:33.646 13:16:52 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:31:33.646 13:16:52 -- bdev/blockdev.sh@313 -- # local bdev_list 00:31:33.646 13:16:52 -- bdev/blockdev.sh@316 -- # nbd_pid=145609 00:31:33.646 13:16:52 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:33.646 13:16:52 -- bdev/blockdev.sh@318 -- # waitforlisten 145609 /var/tmp/spdk-nbd.sock 00:31:33.646 13:16:52 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:33.646 13:16:52 -- common/autotest_common.sh@819 -- # '[' -z 145609 ']' 00:31:33.646 13:16:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:33.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:31:33.646 13:16:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:33.646 13:16:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:33.646 13:16:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:33.646 13:16:52 -- common/autotest_common.sh@10 -- # set +x 00:31:33.646 [2024-06-11 13:16:52.272595] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:33.646 [2024-06-11 13:16:52.272799] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.646 [2024-06-11 13:16:52.442563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.904 [2024-06-11 13:16:52.637847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.470 13:16:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:34.470 13:16:53 -- common/autotest_common.sh@852 -- # return 0 00:31:34.470 13:16:53 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@24 -- # local i 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:34.470 13:16:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:31:34.728 13:16:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:34.728 13:16:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:34.728 13:16:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:34.728 13:16:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:34.728 13:16:53 -- common/autotest_common.sh@857 -- # local i 00:31:34.728 13:16:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:34.728 13:16:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:34.728 13:16:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:34.728 13:16:53 -- common/autotest_common.sh@861 -- # break 00:31:34.728 13:16:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:34.728 13:16:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:34.728 13:16:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:34.728 1+0 records in 00:31:34.728 1+0 records out 00:31:34.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273983 s, 14.9 MB/s 00:31:34.728 13:16:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:34.728 13:16:53 -- common/autotest_common.sh@874 -- # size=4096 00:31:34.728 13:16:53 -- common/autotest_common.sh@875 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:34.728 13:16:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:34.728 13:16:53 -- common/autotest_common.sh@877 -- # return 0 00:31:34.728 13:16:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:34.728 13:16:53 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:34.728 13:16:53 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:35.010 13:16:53 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:35.010 { 00:31:35.010 "nbd_device": "/dev/nbd0", 00:31:35.010 "bdev_name": "raid5f" 00:31:35.010 } 00:31:35.010 ]' 00:31:35.010 13:16:53 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:35.010 13:16:53 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:35.010 { 00:31:35.010 "nbd_device": "/dev/nbd0", 00:31:35.010 "bdev_name": "raid5f" 00:31:35.010 } 00:31:35.010 ]' 00:31:35.010 13:16:53 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:35.010 13:16:53 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:35.010 13:16:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:35.010 13:16:53 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:35.010 13:16:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:35.010 13:16:53 -- bdev/nbd_common.sh@51 -- # local i 00:31:35.010 13:16:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:35.010 13:16:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:35.273 13:16:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:35.273 13:16:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:35.273 13:16:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:35.273 13:16:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:35.273 13:16:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:35.273 13:16:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:35.273 13:16:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:35.273 13:16:54 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:35.273 13:16:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:35.273 13:16:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:35.273 13:16:54 -- bdev/nbd_common.sh@41 -- # break 00:31:35.273 13:16:54 -- bdev/nbd_common.sh@45 -- # return 0 00:31:35.273 13:16:54 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:35.273 13:16:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:35.273 13:16:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:35.531 13:16:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:35.531 13:16:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:35.531 13:16:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:35.531 13:16:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:35.531 13:16:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:35.531 13:16:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:35.531 13:16:54 -- bdev/nbd_common.sh@65 -- # true 00:31:35.531 13:16:54 -- bdev/nbd_common.sh@65 -- # count=0 00:31:35.531 13:16:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:35.531 13:16:54 -- bdev/nbd_common.sh@122 -- # count=0 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:35.532 13:16:54 -- 
bdev/nbd_common.sh@127 -- # return 0 00:31:35.532 13:16:54 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@12 -- # local i 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:35.532 13:16:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:31:35.790 /dev/nbd0 00:31:35.790 13:16:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:35.790 13:16:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:35.790 13:16:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:35.790 13:16:54 -- common/autotest_common.sh@857 -- # local i 00:31:35.790 13:16:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:35.790 13:16:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:35.790 13:16:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:35.790 13:16:54 -- common/autotest_common.sh@861 -- # break 00:31:35.790 13:16:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:35.790 13:16:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:35.790 13:16:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:35.790 1+0 records in 00:31:35.790 1+0 records out 00:31:35.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293238 s, 14.0 MB/s 00:31:35.790 13:16:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:35.790 13:16:54 -- common/autotest_common.sh@874 -- # size=4096 00:31:35.790 13:16:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:35.790 13:16:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:35.790 13:16:54 -- common/autotest_common.sh@877 -- # return 0 00:31:35.790 13:16:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:35.790 13:16:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:35.790 13:16:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:35.790 13:16:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:35.790 13:16:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:36.358 { 00:31:36.358 "nbd_device": "/dev/nbd0", 00:31:36.358 "bdev_name": "raid5f" 00:31:36.358 } 00:31:36.358 ]' 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:36.358 { 00:31:36.358 "nbd_device": 
"/dev/nbd0", 00:31:36.358 "bdev_name": "raid5f" 00:31:36.358 } 00:31:36.358 ]' 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@65 -- # count=1 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@66 -- # echo 1 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@95 -- # count=1 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:36.358 256+0 records in 00:31:36.358 256+0 records out 00:31:36.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.007322 s, 143 MB/s 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:36.358 13:16:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:36.358 256+0 records in 00:31:36.358 256+0 records out 00:31:36.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298155 s, 35.2 MB/s 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@51 -- # local i 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:36.358 13:16:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:36.616 13:16:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:36.616 13:16:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:36.616 13:16:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:36.616 13:16:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:36.616 13:16:55 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:36.616 13:16:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:36.616 13:16:55 -- bdev/nbd_common.sh@41 -- # break 00:31:36.616 13:16:55 -- bdev/nbd_common.sh@45 -- # return 0 00:31:36.616 13:16:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:36.616 13:16:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:36.617 13:16:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@65 -- # true 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@65 -- # count=0 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@104 -- # count=0 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@109 -- # return 0 00:31:36.875 13:16:55 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:36.875 13:16:55 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:37.133 malloc_lvol_verify 00:31:37.133 13:16:55 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:37.392 4d986cf7-7e1f-4f9c-a4a3-c7102e8d7ef3 00:31:37.650 13:16:56 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:37.650 6e28051f-a3b5-4a2d-99c8-ecebe042d274 00:31:37.650 13:16:56 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:37.909 /dev/nbd0 00:31:37.909 13:16:56 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:37.909 mke2fs 1.45.5 (07-Jan-2020) 00:31:37.909 00:31:37.909 Filesystem too small for a journal 00:31:37.909 Creating filesystem with 1024 4k blocks and 1024 inodes 00:31:37.909 00:31:37.909 Allocating group tables: 0/1 done 00:31:37.909 Writing inode tables: 0/1 done 00:31:37.909 Writing superblocks and filesystem accounting information: 0/1 done 00:31:37.909 00:31:37.909 13:16:56 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:37.909 13:16:56 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:37.909 13:16:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:37.909 13:16:56 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:37.909 13:16:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:37.909 13:16:56 -- bdev/nbd_common.sh@51 -- # local i 00:31:37.909 13:16:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:37.909 13:16:56 -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:38.166 13:16:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:38.166 13:16:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:38.166 13:16:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:38.166 13:16:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:38.166 13:16:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:38.166 13:16:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:38.166 13:16:56 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:38.424 13:16:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:38.424 13:16:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:38.424 13:16:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:38.424 13:16:57 -- bdev/nbd_common.sh@41 -- # break 00:31:38.424 13:16:57 -- bdev/nbd_common.sh@45 -- # return 0 00:31:38.424 13:16:57 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:38.424 13:16:57 -- bdev/nbd_common.sh@147 -- # return 0 00:31:38.424 13:16:57 -- bdev/blockdev.sh@324 -- # killprocess 145609 00:31:38.424 13:16:57 -- common/autotest_common.sh@926 -- # '[' -z 145609 ']' 00:31:38.424 13:16:57 -- common/autotest_common.sh@930 -- # kill -0 145609 00:31:38.424 13:16:57 -- common/autotest_common.sh@931 -- # uname 00:31:38.424 13:16:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:38.424 13:16:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145609 00:31:38.424 13:16:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:38.424 killing process with pid 145609 00:31:38.424 13:16:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:38.424 13:16:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145609' 00:31:38.424 13:16:57 -- common/autotest_common.sh@945 -- # kill 145609 00:31:38.424 13:16:57 -- common/autotest_common.sh@950 -- # wait 145609 00:31:39.796 ************************************ 00:31:39.796 END TEST bdev_nbd 00:31:39.796 ************************************ 00:31:39.796 13:16:58 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:31:39.796 00:31:39.796 real 0m6.217s 00:31:39.796 user 0m8.762s 00:31:39.796 sys 0m1.206s 00:31:39.796 13:16:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.796 13:16:58 -- common/autotest_common.sh@10 -- # set +x 00:31:39.796 13:16:58 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:31:39.796 13:16:58 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:31:39.796 13:16:58 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:31:39.796 13:16:58 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:31:39.796 13:16:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:39.796 13:16:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:39.796 13:16:58 -- common/autotest_common.sh@10 -- # set +x 00:31:39.796 ************************************ 00:31:39.796 START TEST bdev_fio 00:31:39.796 ************************************ 00:31:39.796 13:16:58 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:31:39.796 13:16:58 -- bdev/blockdev.sh@329 -- # local env_context 00:31:39.796 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:31:39.796 13:16:58 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:31:39.796 13:16:58 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:31:39.796 13:16:58 -- 
bdev/blockdev.sh@337 -- # echo '' 00:31:39.796 13:16:58 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:31:39.796 13:16:58 -- bdev/blockdev.sh@337 -- # env_context= 00:31:39.796 13:16:58 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:31:39.796 13:16:58 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:39.796 13:16:58 -- common/autotest_common.sh@1260 -- # local workload=verify 00:31:39.796 13:16:58 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:31:39.796 13:16:58 -- common/autotest_common.sh@1262 -- # local env_context= 00:31:39.796 13:16:58 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:31:39.796 13:16:58 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:39.796 13:16:58 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:31:39.796 13:16:58 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:31:39.796 13:16:58 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:39.796 13:16:58 -- common/autotest_common.sh@1280 -- # cat 00:31:39.796 13:16:58 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:31:39.796 13:16:58 -- common/autotest_common.sh@1293 -- # cat 00:31:39.796 13:16:58 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:31:39.796 13:16:58 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:31:39.796 13:16:58 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:31:39.796 13:16:58 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:31:39.796 13:16:58 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:31:39.796 13:16:58 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:31:39.796 13:16:58 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:31:39.796 13:16:58 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:31:39.796 13:16:58 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:39.796 13:16:58 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:31:39.796 13:16:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:39.796 13:16:58 -- common/autotest_common.sh@10 -- # set +x 00:31:39.796 ************************************ 00:31:39.796 START TEST bdev_fio_rw_verify 00:31:39.796 ************************************ 00:31:39.796 13:16:58 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:39.796 13:16:58 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:39.796 13:16:58 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:39.796 13:16:58 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:31:39.796 13:16:58 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:39.796 13:16:58 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:39.796 13:16:58 -- common/autotest_common.sh@1320 -- # shift 00:31:39.796 13:16:58 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:39.796 13:16:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.796 13:16:58 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:39.796 13:16:58 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:39.796 13:16:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:39.796 13:16:58 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:31:39.796 13:16:58 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:31:39.797 13:16:58 -- common/autotest_common.sh@1326 -- # break 00:31:39.797 13:16:58 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:39.797 13:16:58 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:40.055 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:40.055 fio-3.35 00:31:40.055 Starting 1 thread 00:31:52.254 00:31:52.254 job_raid5f: (groupid=0, jobs=1): err= 0: pid=145857: Tue Jun 11 13:17:09 2024 00:31:52.254 read: IOPS=9676, BW=37.8MiB/s (39.6MB/s)(378MiB/10001msec) 00:31:52.254 slat (usec): min=18, max=113, avg=24.84, stdev= 5.52 00:31:52.254 clat (usec): min=12, max=402, avg=163.54, stdev=61.10 00:31:52.254 lat (usec): min=36, max=424, avg=188.38, stdev=61.99 00:31:52.254 clat percentiles (usec): 00:31:52.254 | 50.000th=[ 161], 99.000th=[ 293], 99.900th=[ 330], 99.990th=[ 367], 00:31:52.254 | 99.999th=[ 404] 00:31:52.254 write: IOPS=10.2k, BW=39.7MiB/s (41.7MB/s)(393MiB/9888msec); 0 zone resets 00:31:52.254 slat (usec): min=9, max=185, avg=21.57, stdev= 5.80 00:31:52.254 clat (usec): min=64, max=1083, avg=374.49, stdev=56.31 00:31:52.254 lat (usec): min=82, max=1269, avg=396.06, stdev=57.82 00:31:52.254 clat percentiles (usec): 00:31:52.254 | 50.000th=[ 375], 99.000th=[ 506], 99.900th=[ 578], 99.990th=[ 873], 00:31:52.254 | 99.999th=[ 1020] 00:31:52.254 bw ( KiB/s): min=37720, max=42416, per=98.91%, avg=40236.63, stdev=1706.68, samples=19 00:31:52.254 iops : min= 9430, max=10604, avg=10059.16, stdev=426.67, samples=19 00:31:52.254 lat (usec) : 20=0.01%, 50=0.01%, 100=9.45%, 250=35.35%, 500=54.57% 00:31:52.254 lat (usec) : 750=0.61%, 1000=0.01% 00:31:52.254 lat (msec) : 2=0.01% 00:31:52.254 cpu : usr=99.17%, sys=0.80%, ctx=86, majf=0, minf=6903 00:31:52.254 IO depths : 1=7.6%, 2=20.0%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.254 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.254 issued 
rwts: total=96775,100563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.254 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:52.254 00:31:52.254 Run status group 0 (all jobs): 00:31:52.254 READ: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=378MiB (396MB), run=10001-10001msec 00:31:52.254 WRITE: bw=39.7MiB/s (41.7MB/s), 39.7MiB/s-39.7MiB/s (41.7MB/s-41.7MB/s), io=393MiB (412MB), run=9888-9888msec 00:31:52.513 ----------------------------------------------------- 00:31:52.513 Suppressions used: 00:31:52.513 count bytes template 00:31:52.513 1 7 /usr/src/fio/parse.c 00:31:52.513 766 73536 /usr/src/fio/iolog.c 00:31:52.513 2 596 libcrypto.so 00:31:52.513 ----------------------------------------------------- 00:31:52.513 00:31:52.513 00:31:52.513 real 0m12.661s 00:31:52.513 user 0m13.183s 00:31:52.513 sys 0m0.703s 00:31:52.513 13:17:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:52.513 13:17:11 -- common/autotest_common.sh@10 -- # set +x 00:31:52.513 ************************************ 00:31:52.513 END TEST bdev_fio_rw_verify 00:31:52.513 ************************************ 00:31:52.513 13:17:11 -- bdev/blockdev.sh@348 -- # rm -f 00:31:52.513 13:17:11 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:52.513 13:17:11 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:31:52.513 13:17:11 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:52.513 13:17:11 -- common/autotest_common.sh@1260 -- # local workload=trim 00:31:52.513 13:17:11 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:31:52.513 13:17:11 -- common/autotest_common.sh@1262 -- # local env_context= 00:31:52.513 13:17:11 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:31:52.513 13:17:11 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:52.513 13:17:11 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:31:52.513 13:17:11 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:31:52.513 13:17:11 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:52.513 13:17:11 -- common/autotest_common.sh@1280 -- # cat 00:31:52.513 13:17:11 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:31:52.513 13:17:11 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:31:52.513 13:17:11 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:31:52.513 13:17:11 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:52.513 13:17:11 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "cb03fd7a-2ff9-4e0a-b9a3-13f76f8662f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cb03fd7a-2ff9-4e0a-b9a3-13f76f8662f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "cb03fd7a-2ff9-4e0a-b9a3-13f76f8662f3",' ' "strip_size_kb": 2,' ' "state": "online",' ' 
"raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "a70c4d8b-e411-44c0-b488-788228d9f149",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "753b2811-4a84-47c4-9d16-11ac6032f564",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "24909e57-0e8f-47e1-87ff-2c5214959ccb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:52.513 13:17:11 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:31:52.513 13:17:11 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:52.513 13:17:11 -- bdev/blockdev.sh@360 -- # popd 00:31:52.513 /home/vagrant/spdk_repo/spdk 00:31:52.513 13:17:11 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:31:52.513 13:17:11 -- bdev/blockdev.sh@362 -- # return 0 00:31:52.513 00:31:52.513 real 0m12.828s 00:31:52.513 user 0m13.297s 00:31:52.513 sys 0m0.757s 00:31:52.513 13:17:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:52.513 ************************************ 00:31:52.513 END TEST bdev_fio 00:31:52.513 ************************************ 00:31:52.513 13:17:11 -- common/autotest_common.sh@10 -- # set +x 00:31:52.513 13:17:11 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:52.513 13:17:11 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:52.513 13:17:11 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:31:52.513 13:17:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:52.513 13:17:11 -- common/autotest_common.sh@10 -- # set +x 00:31:52.772 ************************************ 00:31:52.772 START TEST bdev_verify 00:31:52.772 ************************************ 00:31:52.772 13:17:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:52.772 [2024-06-11 13:17:11.415023] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:52.772 [2024-06-11 13:17:11.416038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146048 ] 00:31:52.772 [2024-06-11 13:17:11.584169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:53.047 [2024-06-11 13:17:11.853453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.047 [2024-06-11 13:17:11.853456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.614 Running I/O for 5 seconds... 
00:31:58.878 00:31:58.878 Latency(us) 00:31:58.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.878 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:58.878 Verification LBA range: start 0x0 length 0x2000 00:31:58.878 raid5f : 5.02 7587.24 29.64 0.00 0.00 26743.64 161.98 20614.05 00:31:58.878 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:58.878 Verification LBA range: start 0x2000 length 0x2000 00:31:58.878 raid5f : 5.02 6666.84 26.04 0.00 0.00 30428.69 729.83 23473.80 00:31:58.878 =================================================================================================================== 00:31:58.878 Total : 14254.08 55.68 0.00 0.00 28466.98 161.98 23473.80 00:32:00.254 00:32:00.254 real 0m7.443s 00:32:00.254 user 0m13.470s 00:32:00.254 sys 0m0.361s 00:32:00.254 13:17:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.254 ************************************ 00:32:00.254 END TEST bdev_verify 00:32:00.254 ************************************ 00:32:00.254 13:17:18 -- common/autotest_common.sh@10 -- # set +x 00:32:00.254 13:17:18 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:00.254 13:17:18 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:00.254 13:17:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:00.254 13:17:18 -- common/autotest_common.sh@10 -- # set +x 00:32:00.254 ************************************ 00:32:00.254 START TEST bdev_verify_big_io 00:32:00.254 ************************************ 00:32:00.254 13:17:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:00.254 [2024-06-11 13:17:18.918547] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:00.255 [2024-06-11 13:17:18.918768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146162 ] 00:32:00.255 [2024-06-11 13:17:19.091660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:00.513 [2024-06-11 13:17:19.327217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.513 [2024-06-11 13:17:19.327224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.080 Running I/O for 5 seconds... 
00:32:06.347 00:32:06.347 Latency(us) 00:32:06.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.347 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:06.347 Verification LBA range: start 0x0 length 0x200 00:32:06.347 raid5f : 5.20 497.74 31.11 0.00 0.00 6698069.65 240.17 207808.70 00:32:06.347 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:06.347 Verification LBA range: start 0x200 length 0x200 00:32:06.347 raid5f : 5.19 556.93 34.81 0.00 0.00 5983795.84 202.94 190650.18 00:32:06.347 =================================================================================================================== 00:32:06.347 Total : 1054.67 65.92 0.00 0.00 6321243.92 202.94 207808.70 00:32:07.723 00:32:07.723 real 0m7.665s 00:32:07.723 user 0m13.932s 00:32:07.723 sys 0m0.376s 00:32:07.723 13:17:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:07.723 ************************************ 00:32:07.723 END TEST bdev_verify_big_io 00:32:07.723 ************************************ 00:32:07.723 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:32:07.723 13:17:26 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:07.723 13:17:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:07.723 13:17:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:07.723 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:32:07.981 ************************************ 00:32:07.981 START TEST bdev_write_zeroes 00:32:07.981 ************************************ 00:32:07.981 13:17:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:07.981 [2024-06-11 13:17:26.627712] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:07.981 [2024-06-11 13:17:26.628107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146297 ] 00:32:07.981 [2024-06-11 13:17:26.798021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.239 [2024-06-11 13:17:26.999845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.805 Running I/O for 1 seconds... 
00:32:09.739 00:32:09.739 Latency(us) 00:32:09.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.739 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:09.739 raid5f : 1.01 22532.36 88.02 0.00 0.00 5661.10 1541.59 7119.59 00:32:09.739 =================================================================================================================== 00:32:09.739 Total : 22532.36 88.02 0.00 0.00 5661.10 1541.59 7119.59 00:32:11.118 00:32:11.118 real 0m3.282s 00:32:11.118 user 0m2.870s 00:32:11.118 sys 0m0.296s 00:32:11.118 13:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.118 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:32:11.118 ************************************ 00:32:11.118 END TEST bdev_write_zeroes 00:32:11.118 ************************************ 00:32:11.118 13:17:29 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:11.118 13:17:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:11.118 13:17:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:11.118 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:32:11.118 ************************************ 00:32:11.118 START TEST bdev_json_nonenclosed 00:32:11.118 ************************************ 00:32:11.118 13:17:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:11.376 [2024-06-11 13:17:29.968819] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:11.376 [2024-06-11 13:17:29.969210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146354 ] 00:32:11.376 [2024-06-11 13:17:30.137932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.634 [2024-06-11 13:17:30.346986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.634 [2024-06-11 13:17:30.347209] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:32:11.634 [2024-06-11 13:17:30.347261] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:11.892 00:32:11.892 real 0m0.823s 00:32:11.892 user 0m0.611s 00:32:11.892 sys 0m0.112s 00:32:11.892 13:17:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.892 13:17:30 -- common/autotest_common.sh@10 -- # set +x 00:32:11.892 ************************************ 00:32:11.892 END TEST bdev_json_nonenclosed 00:32:11.892 ************************************ 00:32:12.150 13:17:30 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:12.150 13:17:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:12.150 13:17:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:12.150 13:17:30 -- common/autotest_common.sh@10 -- # set +x 00:32:12.150 ************************************ 00:32:12.150 START TEST bdev_json_nonarray 00:32:12.150 ************************************ 00:32:12.150 13:17:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:12.150 [2024-06-11 13:17:30.851025] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:12.150 [2024-06-11 13:17:30.851465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146392 ] 00:32:12.409 [2024-06-11 13:17:31.026580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.409 [2024-06-11 13:17:31.218298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.409 [2024-06-11 13:17:31.218532] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:32:12.409 [2024-06-11 13:17:31.218572] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:12.976 00:32:12.976 real 0m0.821s 00:32:12.976 user 0m0.585s 00:32:12.976 sys 0m0.136s 00:32:12.976 13:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:12.976 ************************************ 00:32:12.976 END TEST bdev_json_nonarray 00:32:12.976 ************************************ 00:32:12.976 13:17:31 -- common/autotest_common.sh@10 -- # set +x 00:32:12.976 13:17:31 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:32:12.976 13:17:31 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:32:12.976 13:17:31 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:32:12.976 13:17:31 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:32:12.976 13:17:31 -- bdev/blockdev.sh@809 -- # cleanup 00:32:12.976 13:17:31 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:32:12.976 13:17:31 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:12.976 13:17:31 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:32:12.976 13:17:31 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:32:12.976 13:17:31 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:32:12.976 13:17:31 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:32:12.976 00:32:12.976 real 0m49.585s 00:32:12.976 user 1m7.847s 00:32:12.976 sys 0m4.949s 00:32:12.976 13:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:12.976 13:17:31 -- common/autotest_common.sh@10 -- # set +x 00:32:12.976 ************************************ 00:32:12.976 END TEST blockdev_raid5f 00:32:12.976 ************************************ 00:32:12.976 13:17:31 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:12.976 13:17:31 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:12.976 13:17:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:12.976 13:17:31 -- common/autotest_common.sh@10 -- # set +x 00:32:12.976 13:17:31 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:12.976 13:17:31 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:12.976 13:17:31 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:12.976 13:17:31 -- common/autotest_common.sh@10 -- # set +x 00:32:14.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:14.351 Waiting for block devices as requested 00:32:14.609 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:32:14.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:14.868 Cleaning 00:32:14.868 Removing: /var/run/dpdk/spdk0/config 00:32:15.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:15.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:15.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:15.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:15.127 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:15.127 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:15.127 Removing: /dev/shm/spdk_tgt_trace.pid104997 00:32:15.127 Removing: /var/run/dpdk/spdk0 00:32:15.127 Removing: /var/run/dpdk/spdk_pid104756 00:32:15.127 Removing: /var/run/dpdk/spdk_pid104997 00:32:15.127 Removing: /var/run/dpdk/spdk_pid105315 00:32:15.127 Removing: /var/run/dpdk/spdk_pid105595 00:32:15.127 Removing: /var/run/dpdk/spdk_pid105765 00:32:15.127 Removing: /var/run/dpdk/spdk_pid105908 00:32:15.127 Removing: /var/run/dpdk/spdk_pid106010 
00:32:15.127 Removing: /var/run/dpdk/spdk_pid106136 00:32:15.127 Removing: /var/run/dpdk/spdk_pid106260 00:32:15.127 Removing: /var/run/dpdk/spdk_pid106313 00:32:15.127 Removing: /var/run/dpdk/spdk_pid106363 00:32:15.127 Removing: /var/run/dpdk/spdk_pid106439 00:32:15.127 Removing: /var/run/dpdk/spdk_pid106583 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107152 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107234 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107338 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107368 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107525 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107560 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107720 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107748 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107818 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107849 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107934 00:32:15.127 Removing: /var/run/dpdk/spdk_pid107966 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108171 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108214 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108257 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108351 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108431 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108497 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108587 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108621 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108675 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108709 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108779 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108820 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108868 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108902 00:32:15.127 Removing: /var/run/dpdk/spdk_pid108972 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109006 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109060 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109095 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109146 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109208 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109255 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109289 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109343 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109400 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109448 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109487 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109534 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109586 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109640 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109679 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109727 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109780 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109833 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109869 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109923 00:32:15.127 Removing: /var/run/dpdk/spdk_pid109957 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110026 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110066 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110113 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110150 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110222 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110263 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110321 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110355 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110423 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110464 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110511 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110601 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110752 00:32:15.127 Removing: /var/run/dpdk/spdk_pid110931 00:32:15.127 
Removing: /var/run/dpdk/spdk_pid111026 00:32:15.127 Removing: /var/run/dpdk/spdk_pid111095 00:32:15.127 Removing: /var/run/dpdk/spdk_pid112446 00:32:15.127 Removing: /var/run/dpdk/spdk_pid112688 00:32:15.127 Removing: /var/run/dpdk/spdk_pid112920 00:32:15.127 Removing: /var/run/dpdk/spdk_pid113055 00:32:15.127 Removing: /var/run/dpdk/spdk_pid113200 00:32:15.127 Removing: /var/run/dpdk/spdk_pid113289 00:32:15.127 Removing: /var/run/dpdk/spdk_pid113320 00:32:15.127 Removing: /var/run/dpdk/spdk_pid113358 00:32:15.127 Removing: /var/run/dpdk/spdk_pid113886 00:32:15.127 Removing: /var/run/dpdk/spdk_pid113973 00:32:15.127 Removing: /var/run/dpdk/spdk_pid114113 00:32:15.127 Removing: /var/run/dpdk/spdk_pid114177 00:32:15.127 Removing: /var/run/dpdk/spdk_pid115448 00:32:15.127 Removing: /var/run/dpdk/spdk_pid116379 00:32:15.127 Removing: /var/run/dpdk/spdk_pid117331 00:32:15.386 Removing: /var/run/dpdk/spdk_pid118505 00:32:15.386 Removing: /var/run/dpdk/spdk_pid119658 00:32:15.386 Removing: /var/run/dpdk/spdk_pid120779 00:32:15.386 Removing: /var/run/dpdk/spdk_pid122375 00:32:15.386 Removing: /var/run/dpdk/spdk_pid123658 00:32:15.386 Removing: /var/run/dpdk/spdk_pid124930 00:32:15.386 Removing: /var/run/dpdk/spdk_pid125632 00:32:15.386 Removing: /var/run/dpdk/spdk_pid126210 00:32:15.386 Removing: /var/run/dpdk/spdk_pid126896 00:32:15.386 Removing: /var/run/dpdk/spdk_pid127409 00:32:15.386 Removing: /var/run/dpdk/spdk_pid127995 00:32:15.386 Removing: /var/run/dpdk/spdk_pid128617 00:32:15.386 Removing: /var/run/dpdk/spdk_pid129312 00:32:15.386 Removing: /var/run/dpdk/spdk_pid129867 00:32:15.386 Removing: /var/run/dpdk/spdk_pid131346 00:32:15.386 Removing: /var/run/dpdk/spdk_pid131977 00:32:15.386 Removing: /var/run/dpdk/spdk_pid132556 00:32:15.386 Removing: /var/run/dpdk/spdk_pid134164 00:32:15.386 Removing: /var/run/dpdk/spdk_pid134869 00:32:15.386 Removing: /var/run/dpdk/spdk_pid135539 00:32:15.386 Removing: /var/run/dpdk/spdk_pid136357 00:32:15.386 Removing: /var/run/dpdk/spdk_pid136422 00:32:15.386 Removing: /var/run/dpdk/spdk_pid136474 00:32:15.386 Removing: /var/run/dpdk/spdk_pid136551 00:32:15.386 Removing: /var/run/dpdk/spdk_pid136682 00:32:15.386 Removing: /var/run/dpdk/spdk_pid136849 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137067 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137369 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137384 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137446 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137466 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137498 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137548 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137579 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137601 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137632 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137660 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137688 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137743 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137763 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137795 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137823 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137854 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137876 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137929 00:32:15.386 Removing: /var/run/dpdk/spdk_pid137957 00:32:15.387 Removing: /var/run/dpdk/spdk_pid137985 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138030 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138061 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138103 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138206 00:32:15.387 Removing: 
/var/run/dpdk/spdk_pid138246 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138274 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138318 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138345 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138360 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138443 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138474 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138510 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138538 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138566 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138600 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138624 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138652 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138670 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138694 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138738 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138785 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138831 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138869 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138902 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138917 00:32:15.387 Removing: /var/run/dpdk/spdk_pid138981 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139027 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139068 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139096 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139120 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139141 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139166 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139202 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139230 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139255 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139339 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139448 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139589 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139643 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139696 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139760 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139793 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139827 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139872 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139921 00:32:15.387 Removing: /var/run/dpdk/spdk_pid139943 00:32:15.387 Removing: /var/run/dpdk/spdk_pid140031 00:32:15.387 Removing: /var/run/dpdk/spdk_pid140091 00:32:15.387 Removing: /var/run/dpdk/spdk_pid140169 00:32:15.387 Removing: /var/run/dpdk/spdk_pid140435 00:32:15.387 Removing: /var/run/dpdk/spdk_pid140558 00:32:15.387 Removing: /var/run/dpdk/spdk_pid140606 00:32:15.387 Removing: /var/run/dpdk/spdk_pid140700 00:32:15.646 Removing: /var/run/dpdk/spdk_pid140807 00:32:15.646 Removing: /var/run/dpdk/spdk_pid140845 00:32:15.646 Removing: /var/run/dpdk/spdk_pid141120 00:32:15.646 Removing: /var/run/dpdk/spdk_pid141294 00:32:15.646 Removing: /var/run/dpdk/spdk_pid141407 00:32:15.646 Removing: /var/run/dpdk/spdk_pid141475 00:32:15.646 Removing: /var/run/dpdk/spdk_pid141504 00:32:15.646 Removing: /var/run/dpdk/spdk_pid141589 00:32:15.646 Removing: /var/run/dpdk/spdk_pid142151 00:32:15.646 Removing: /var/run/dpdk/spdk_pid142201 00:32:15.646 Removing: /var/run/dpdk/spdk_pid142533 00:32:15.646 Removing: /var/run/dpdk/spdk_pid142682 00:32:15.646 Removing: /var/run/dpdk/spdk_pid142808 00:32:15.646 Removing: /var/run/dpdk/spdk_pid142868 00:32:15.646 Removing: /var/run/dpdk/spdk_pid142906 00:32:15.646 Removing: /var/run/dpdk/spdk_pid142940 00:32:15.646 Removing: /var/run/dpdk/spdk_pid144395 00:32:15.646 Removing: /var/run/dpdk/spdk_pid144556 00:32:15.646 Removing: /var/run/dpdk/spdk_pid144570 00:32:15.646 Removing: 
/var/run/dpdk/spdk_pid144587 00:32:15.646 Removing: /var/run/dpdk/spdk_pid145088 00:32:15.646 Removing: /var/run/dpdk/spdk_pid145211 00:32:15.646 Removing: /var/run/dpdk/spdk_pid145378 00:32:15.646 Removing: /var/run/dpdk/spdk_pid145478 00:32:15.646 Removing: /var/run/dpdk/spdk_pid145522 00:32:15.646 Removing: /var/run/dpdk/spdk_pid145837 00:32:15.646 Removing: /var/run/dpdk/spdk_pid146048 00:32:15.646 Removing: /var/run/dpdk/spdk_pid146162 00:32:15.646 Removing: /var/run/dpdk/spdk_pid146297 00:32:15.646 Removing: /var/run/dpdk/spdk_pid146354 00:32:15.646 Removing: /var/run/dpdk/spdk_pid146392 00:32:15.646 Clean 00:32:15.646 killing process with pid 93916 00:32:15.646 killing process with pid 93985 00:32:15.646 13:17:34 -- common/autotest_common.sh@1436 -- # return 0 00:32:15.646 13:17:34 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:32:15.646 13:17:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:15.646 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:32:15.904 13:17:34 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:32:15.904 13:17:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:15.904 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:32:15.904 13:17:34 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:15.904 13:17:34 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:15.904 13:17:34 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:15.904 13:17:34 -- spdk/autotest.sh@394 -- # hash lcov 00:32:15.904 13:17:34 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:15.904 13:17:34 -- spdk/autotest.sh@396 -- # hostname 00:32:15.904 13:17:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:15.904 geninfo: WARNING: invalid characters removed from testname! 
00:33:02.595 13:18:18 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:05.877 13:18:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:09.252 13:18:27 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:12.538 13:18:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:15.832 13:18:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:19.115 13:18:37 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:22.400 13:18:40 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:22.400 13:18:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:22.400 13:18:40 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:22.400 13:18:40 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.400 13:18:40 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.400 13:18:40 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:22.400 13:18:40 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:22.400 13:18:40 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:22.400 13:18:40 -- paths/export.sh@5 -- $ export PATH 00:33:22.400 13:18:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:22.400 13:18:40 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:22.400 13:18:40 -- common/autobuild_common.sh@435 -- $ date +%s 00:33:22.400 13:18:40 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718111920.XXXXXX 00:33:22.400 13:18:40 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718111920.0rmVYE 00:33:22.400 13:18:40 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:33:22.400 13:18:40 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:33:22.400 13:18:40 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:22.400 13:18:40 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:22.400 13:18:40 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:22.400 13:18:40 -- common/autobuild_common.sh@451 -- $ get_config_params 00:33:22.400 13:18:40 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:22.400 13:18:40 -- common/autotest_common.sh@10 -- $ set +x 00:33:22.400 13:18:40 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:33:22.400 13:18:40 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:33:22.400 13:18:40 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:33:22.400 13:18:40 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:22.400 13:18:40 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:22.400 13:18:40 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:22.400 13:18:40 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:33:22.400 13:18:40 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:33:22.400 13:18:40 -- common/autotest_common.sh@10 -- $ set +x 00:33:22.400 13:18:40 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:33:22.400 13:18:40 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:33:22.400 13:18:40 -- spdk/autopackage.sh@40 -- $ get_config_params 00:33:22.400 13:18:40 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:33:22.400 13:18:40 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:22.400 13:18:40 -- common/autotest_common.sh@10 -- $ set +x 00:33:22.400 13:18:40 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:33:22.400 13:18:40 -- spdk/autopackage.sh@41 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto 00:33:22.400 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:22.400 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:33:22.400 Using 'verbs' RDMA provider 00:33:35.181 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:33:47.381 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:33:47.381 Creating mk/config.mk...done. 00:33:47.381 Creating mk/cc.flags.mk...done. 00:33:47.381 Type 'make' to build. 00:33:47.381 13:19:05 -- spdk/autopackage.sh@43 -- $ make -j10 00:33:47.381 make[1]: Nothing to be done for 'all'. 00:33:51.622 The Meson build system 00:33:51.622 Version: 1.4.0 00:33:51.622 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:33:51.622 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:33:51.622 Build type: native build 00:33:51.622 Program cat found: YES (/usr/bin/cat) 00:33:51.622 Project name: DPDK 00:33:51.622 Project version: 23.11.0 00:33:51.622 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:33:51.622 C linker for the host machine: cc ld.bfd 2.34 00:33:51.622 Host machine cpu family: x86_64 00:33:51.622 Host machine cpu: x86_64 00:33:51.622 Message: ## Building in Developer Mode ## 00:33:51.622 Program pkg-config found: YES (/usr/bin/pkg-config) 00:33:51.622 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:33:51.622 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:33:51.622 Program python3 found: YES (/usr/bin/python3) 00:33:51.622 Program cat found: YES (/usr/bin/cat) 00:33:51.622 Compiler for C supports arguments -march=native: YES 00:33:51.622 Checking for size of "void *" : 8 00:33:51.622 Checking for size of "void *" : 8 (cached) 00:33:51.622 Library m found: YES 00:33:51.622 Library numa found: YES 00:33:51.622 Has header "numaif.h" : YES 00:33:51.622 Library fdt found: NO 00:33:51.622 Library execinfo found: NO 00:33:51.622 Has header "execinfo.h" : YES 00:33:51.622 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:33:51.622 Run-time dependency libarchive found: NO (tried pkgconfig) 00:33:51.622 Run-time dependency libbsd found: NO (tried pkgconfig) 00:33:51.622 Run-time dependency jansson found: NO (tried pkgconfig) 00:33:51.622 Run-time dependency openssl found: YES 1.1.1f 00:33:51.622 Run-time dependency libpcap found: NO (tried pkgconfig) 00:33:51.622 Library pcap found: NO 00:33:51.622 Compiler for C supports arguments -Wcast-qual: YES 00:33:51.622 Compiler for C supports arguments -Wdeprecated: YES 00:33:51.622 Compiler for C supports arguments -Wformat: YES 00:33:51.622 Compiler for C supports arguments -Wformat-nonliteral: YES 00:33:51.622 Compiler for C supports arguments -Wformat-security: YES 00:33:51.622 Compiler for C supports arguments -Wmissing-declarations: YES 00:33:51.622 Compiler for C supports arguments -Wmissing-prototypes: YES 00:33:51.622 Compiler for C supports arguments -Wnested-externs: YES 00:33:51.622 Compiler for C supports arguments -Wold-style-definition: YES 00:33:51.622 Compiler for C supports arguments -Wpointer-arith: YES 00:33:51.622 Compiler for C supports arguments -Wsign-compare: YES 00:33:51.622 Compiler for C 
supports arguments -Wstrict-prototypes: YES 00:33:51.622 Compiler for C supports arguments -Wundef: YES 00:33:51.622 Compiler for C supports arguments -Wwrite-strings: YES 00:33:51.623 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:33:51.623 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:33:51.623 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:33:51.623 Program objdump found: YES (/usr/bin/objdump) 00:33:51.623 Compiler for C supports arguments -mavx512f: YES 00:33:51.623 Checking if "AVX512 checking" compiles: YES 00:33:51.623 Fetching value of define "__SSE4_2__" : 1 00:33:51.623 Fetching value of define "__AES__" : 1 00:33:51.623 Fetching value of define "__AVX__" : 1 00:33:51.623 Fetching value of define "__AVX2__" : 1 00:33:51.623 Fetching value of define "__AVX512BW__" : (undefined) 00:33:51.623 Fetching value of define "__AVX512CD__" : (undefined) 00:33:51.623 Fetching value of define "__AVX512DQ__" : (undefined) 00:33:51.623 Fetching value of define "__AVX512F__" : (undefined) 00:33:51.623 Fetching value of define "__AVX512VL__" : (undefined) 00:33:51.623 Fetching value of define "__PCLMUL__" : 1 00:33:51.623 Fetching value of define "__RDRND__" : 1 00:33:51.623 Fetching value of define "__RDSEED__" : 1 00:33:51.623 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:33:51.623 Fetching value of define "__znver1__" : (undefined) 00:33:51.623 Fetching value of define "__znver2__" : (undefined) 00:33:51.623 Fetching value of define "__znver3__" : (undefined) 00:33:51.623 Fetching value of define "__znver4__" : (undefined) 00:33:51.623 Compiler for C supports arguments -ffat-lto-objects: YES 00:33:51.623 Library asan found: YES 00:33:51.623 Compiler for C supports arguments -Wno-format-truncation: YES 00:33:51.623 Message: lib/log: Defining dependency "log" 00:33:51.623 Message: lib/kvargs: Defining dependency "kvargs" 00:33:51.623 Message: lib/telemetry: Defining dependency "telemetry" 00:33:51.623 Library rt found: YES 00:33:51.623 Checking for function "getentropy" : NO 00:33:51.623 Message: lib/eal: Defining dependency "eal" 00:33:51.623 Message: lib/ring: Defining dependency "ring" 00:33:51.623 Message: lib/rcu: Defining dependency "rcu" 00:33:51.623 Message: lib/mempool: Defining dependency "mempool" 00:33:51.623 Message: lib/mbuf: Defining dependency "mbuf" 00:33:51.623 Fetching value of define "__PCLMUL__" : 1 (cached) 00:33:51.623 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:33:51.623 Compiler for C supports arguments -mpclmul: YES 00:33:51.623 Compiler for C supports arguments -maes: YES 00:33:51.623 Compiler for C supports arguments -mavx512f: YES (cached) 00:33:51.623 Compiler for C supports arguments -mavx512bw: YES 00:33:51.623 Compiler for C supports arguments -mavx512dq: YES 00:33:51.623 Compiler for C supports arguments -mavx512vl: YES 00:33:51.623 Compiler for C supports arguments -mvpclmulqdq: YES 00:33:51.623 Compiler for C supports arguments -mavx2: YES 00:33:51.623 Compiler for C supports arguments -mavx: YES 00:33:51.623 Message: lib/net: Defining dependency "net" 00:33:51.623 Message: lib/meter: Defining dependency "meter" 00:33:51.623 Message: lib/ethdev: Defining dependency "ethdev" 00:33:51.623 Message: lib/pci: Defining dependency "pci" 00:33:51.623 Message: lib/cmdline: Defining dependency "cmdline" 00:33:51.623 Message: lib/hash: Defining dependency "hash" 00:33:51.623 Message: lib/timer: Defining dependency "timer" 00:33:51.623 Message: lib/compressdev: 
Defining dependency "compressdev" 00:33:51.623 Message: lib/cryptodev: Defining dependency "cryptodev" 00:33:51.623 Message: lib/dmadev: Defining dependency "dmadev" 00:33:51.623 Compiler for C supports arguments -Wno-cast-qual: YES 00:33:51.623 Message: lib/power: Defining dependency "power" 00:33:51.623 Message: lib/reorder: Defining dependency "reorder" 00:33:51.623 Message: lib/security: Defining dependency "security" 00:33:51.623 Has header "linux/userfaultfd.h" : YES 00:33:51.623 Has header "linux/vduse.h" : NO 00:33:51.623 Message: lib/vhost: Defining dependency "vhost" 00:33:51.623 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:33:51.623 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:33:51.623 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:33:51.623 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:33:51.623 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:33:51.623 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:33:51.623 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:33:51.623 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:33:51.623 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:33:51.623 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:33:51.623 Program doxygen found: YES (/usr/bin/doxygen) 00:33:51.623 Configuring doxy-api-html.conf using configuration 00:33:51.623 Configuring doxy-api-man.conf using configuration 00:33:51.623 Program mandb found: YES (/usr/bin/mandb) 00:33:51.623 Program sphinx-build found: NO 00:33:51.623 Configuring rte_build_config.h using configuration 00:33:51.623 Message: 00:33:51.623 ================= 00:33:51.623 Applications Enabled 00:33:51.623 ================= 00:33:51.623 00:33:51.623 apps: 00:33:51.623 00:33:51.623 00:33:51.623 Message: 00:33:51.623 ================= 00:33:51.623 Libraries Enabled 00:33:51.623 ================= 00:33:51.623 00:33:51.623 libs: 00:33:51.623 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:33:51.623 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:33:51.623 cryptodev, dmadev, power, reorder, security, vhost, 00:33:51.623 00:33:51.623 Message: 00:33:51.623 =============== 00:33:51.623 Drivers Enabled 00:33:51.623 =============== 00:33:51.623 00:33:51.623 common: 00:33:51.623 00:33:51.623 bus: 00:33:51.623 pci, vdev, 00:33:51.623 mempool: 00:33:51.623 ring, 00:33:51.623 dma: 00:33:51.623 00:33:51.623 net: 00:33:51.623 00:33:51.623 crypto: 00:33:51.623 00:33:51.623 compress: 00:33:51.623 00:33:51.623 vdpa: 00:33:51.623 00:33:51.623 00:33:51.623 Message: 00:33:51.623 ================= 00:33:51.623 Content Skipped 00:33:51.623 ================= 00:33:51.623 00:33:51.623 apps: 00:33:51.623 dumpcap: explicitly disabled via build config 00:33:51.623 graph: explicitly disabled via build config 00:33:51.623 pdump: explicitly disabled via build config 00:33:51.623 proc-info: explicitly disabled via build config 00:33:51.623 test-acl: explicitly disabled via build config 00:33:51.623 test-bbdev: explicitly disabled via build config 00:33:51.623 test-cmdline: explicitly disabled via build config 00:33:51.623 test-compress-perf: explicitly disabled via build config 00:33:51.623 test-crypto-perf: explicitly disabled via build config 00:33:51.623 test-dma-perf: explicitly disabled via build config 00:33:51.623 test-eventdev: explicitly disabled via build config 00:33:51.623 
test-fib: explicitly disabled via build config 00:33:51.623 test-flow-perf: explicitly disabled via build config 00:33:51.623 test-gpudev: explicitly disabled via build config 00:33:51.623 test-mldev: explicitly disabled via build config 00:33:51.623 test-pipeline: explicitly disabled via build config 00:33:51.623 test-pmd: explicitly disabled via build config 00:33:51.623 test-regex: explicitly disabled via build config 00:33:51.623 test-sad: explicitly disabled via build config 00:33:51.623 test-security-perf: explicitly disabled via build config 00:33:51.623 00:33:51.623 libs: 00:33:51.623 metrics: explicitly disabled via build config 00:33:51.623 acl: explicitly disabled via build config 00:33:51.623 bbdev: explicitly disabled via build config 00:33:51.623 bitratestats: explicitly disabled via build config 00:33:51.623 bpf: explicitly disabled via build config 00:33:51.623 cfgfile: explicitly disabled via build config 00:33:51.623 distributor: explicitly disabled via build config 00:33:51.623 efd: explicitly disabled via build config 00:33:51.623 eventdev: explicitly disabled via build config 00:33:51.623 dispatcher: explicitly disabled via build config 00:33:51.623 gpudev: explicitly disabled via build config 00:33:51.623 gro: explicitly disabled via build config 00:33:51.623 gso: explicitly disabled via build config 00:33:51.623 ip_frag: explicitly disabled via build config 00:33:51.623 jobstats: explicitly disabled via build config 00:33:51.623 latencystats: explicitly disabled via build config 00:33:51.623 lpm: explicitly disabled via build config 00:33:51.623 member: explicitly disabled via build config 00:33:51.623 pcapng: explicitly disabled via build config 00:33:51.623 rawdev: explicitly disabled via build config 00:33:51.623 regexdev: explicitly disabled via build config 00:33:51.623 mldev: explicitly disabled via build config 00:33:51.623 rib: explicitly disabled via build config 00:33:51.623 sched: explicitly disabled via build config 00:33:51.623 stack: explicitly disabled via build config 00:33:51.623 ipsec: explicitly disabled via build config 00:33:51.623 pdcp: explicitly disabled via build config 00:33:51.623 fib: explicitly disabled via build config 00:33:51.623 port: explicitly disabled via build config 00:33:51.623 pdump: explicitly disabled via build config 00:33:51.623 table: explicitly disabled via build config 00:33:51.623 pipeline: explicitly disabled via build config 00:33:51.623 graph: explicitly disabled via build config 00:33:51.623 node: explicitly disabled via build config 00:33:51.623 00:33:51.623 drivers: 00:33:51.623 common/cpt: not in enabled drivers build config 00:33:51.623 common/dpaax: not in enabled drivers build config 00:33:51.624 common/iavf: not in enabled drivers build config 00:33:51.624 common/idpf: not in enabled drivers build config 00:33:51.624 common/mvep: not in enabled drivers build config 00:33:51.624 common/octeontx: not in enabled drivers build config 00:33:51.624 bus/auxiliary: not in enabled drivers build config 00:33:51.624 bus/cdx: not in enabled drivers build config 00:33:51.624 bus/dpaa: not in enabled drivers build config 00:33:51.624 bus/fslmc: not in enabled drivers build config 00:33:51.624 bus/ifpga: not in enabled drivers build config 00:33:51.624 bus/platform: not in enabled drivers build config 00:33:51.624 bus/vmbus: not in enabled drivers build config 00:33:51.624 common/cnxk: not in enabled drivers build config 00:33:51.624 common/mlx5: not in enabled drivers build config 00:33:51.624 common/nfp: not in enabled 
drivers build config 00:33:51.624 common/qat: not in enabled drivers build config 00:33:51.624 common/sfc_efx: not in enabled drivers build config 00:33:51.624 mempool/bucket: not in enabled drivers build config 00:33:51.624 mempool/cnxk: not in enabled drivers build config 00:33:51.624 mempool/dpaa: not in enabled drivers build config 00:33:51.624 mempool/dpaa2: not in enabled drivers build config 00:33:51.624 mempool/octeontx: not in enabled drivers build config 00:33:51.624 mempool/stack: not in enabled drivers build config 00:33:51.624 dma/cnxk: not in enabled drivers build config 00:33:51.624 dma/dpaa: not in enabled drivers build config 00:33:51.624 dma/dpaa2: not in enabled drivers build config 00:33:51.624 dma/hisilicon: not in enabled drivers build config 00:33:51.624 dma/idxd: not in enabled drivers build config 00:33:51.624 dma/ioat: not in enabled drivers build config 00:33:51.624 dma/skeleton: not in enabled drivers build config 00:33:51.624 net/af_packet: not in enabled drivers build config 00:33:51.624 net/af_xdp: not in enabled drivers build config 00:33:51.624 net/ark: not in enabled drivers build config 00:33:51.624 net/atlantic: not in enabled drivers build config 00:33:51.624 net/avp: not in enabled drivers build config 00:33:51.624 net/axgbe: not in enabled drivers build config 00:33:51.624 net/bnx2x: not in enabled drivers build config 00:33:51.624 net/bnxt: not in enabled drivers build config 00:33:51.624 net/bonding: not in enabled drivers build config 00:33:51.624 net/cnxk: not in enabled drivers build config 00:33:51.624 net/cpfl: not in enabled drivers build config 00:33:51.624 net/cxgbe: not in enabled drivers build config 00:33:51.624 net/dpaa: not in enabled drivers build config 00:33:51.624 net/dpaa2: not in enabled drivers build config 00:33:51.624 net/e1000: not in enabled drivers build config 00:33:51.624 net/ena: not in enabled drivers build config 00:33:51.624 net/enetc: not in enabled drivers build config 00:33:51.624 net/enetfec: not in enabled drivers build config 00:33:51.624 net/enic: not in enabled drivers build config 00:33:51.624 net/failsafe: not in enabled drivers build config 00:33:51.624 net/fm10k: not in enabled drivers build config 00:33:51.624 net/gve: not in enabled drivers build config 00:33:51.624 net/hinic: not in enabled drivers build config 00:33:51.624 net/hns3: not in enabled drivers build config 00:33:51.624 net/i40e: not in enabled drivers build config 00:33:51.624 net/iavf: not in enabled drivers build config 00:33:51.624 net/ice: not in enabled drivers build config 00:33:51.624 net/idpf: not in enabled drivers build config 00:33:51.624 net/igc: not in enabled drivers build config 00:33:51.624 net/ionic: not in enabled drivers build config 00:33:51.624 net/ipn3ke: not in enabled drivers build config 00:33:51.624 net/ixgbe: not in enabled drivers build config 00:33:51.624 net/mana: not in enabled drivers build config 00:33:51.624 net/memif: not in enabled drivers build config 00:33:51.624 net/mlx4: not in enabled drivers build config 00:33:51.624 net/mlx5: not in enabled drivers build config 00:33:51.624 net/mvneta: not in enabled drivers build config 00:33:51.624 net/mvpp2: not in enabled drivers build config 00:33:51.624 net/netvsc: not in enabled drivers build config 00:33:51.624 net/nfb: not in enabled drivers build config 00:33:51.624 net/nfp: not in enabled drivers build config 00:33:51.624 net/ngbe: not in enabled drivers build config 00:33:51.624 net/null: not in enabled drivers build config 00:33:51.624 net/octeontx: not 
in enabled drivers build config 00:33:51.624 net/octeon_ep: not in enabled drivers build config 00:33:51.624 net/pcap: not in enabled drivers build config 00:33:51.624 net/pfe: not in enabled drivers build config 00:33:51.624 net/qede: not in enabled drivers build config 00:33:51.624 net/ring: not in enabled drivers build config 00:33:51.624 net/sfc: not in enabled drivers build config 00:33:51.624 net/softnic: not in enabled drivers build config 00:33:51.624 net/tap: not in enabled drivers build config 00:33:51.624 net/thunderx: not in enabled drivers build config 00:33:51.624 net/txgbe: not in enabled drivers build config 00:33:51.624 net/vdev_netvsc: not in enabled drivers build config 00:33:51.624 net/vhost: not in enabled drivers build config 00:33:51.624 net/virtio: not in enabled drivers build config 00:33:51.624 net/vmxnet3: not in enabled drivers build config 00:33:51.624 raw/*: missing internal dependency, "rawdev" 00:33:51.624 crypto/armv8: not in enabled drivers build config 00:33:51.624 crypto/bcmfs: not in enabled drivers build config 00:33:51.624 crypto/caam_jr: not in enabled drivers build config 00:33:51.624 crypto/ccp: not in enabled drivers build config 00:33:51.624 crypto/cnxk: not in enabled drivers build config 00:33:51.624 crypto/dpaa_sec: not in enabled drivers build config 00:33:51.624 crypto/dpaa2_sec: not in enabled drivers build config 00:33:51.624 crypto/ipsec_mb: not in enabled drivers build config 00:33:51.624 crypto/mlx5: not in enabled drivers build config 00:33:51.624 crypto/mvsam: not in enabled drivers build config 00:33:51.624 crypto/nitrox: not in enabled drivers build config 00:33:51.624 crypto/null: not in enabled drivers build config 00:33:51.624 crypto/octeontx: not in enabled drivers build config 00:33:51.624 crypto/openssl: not in enabled drivers build config 00:33:51.624 crypto/scheduler: not in enabled drivers build config 00:33:51.624 crypto/uadk: not in enabled drivers build config 00:33:51.624 crypto/virtio: not in enabled drivers build config 00:33:51.624 compress/isal: not in enabled drivers build config 00:33:51.624 compress/mlx5: not in enabled drivers build config 00:33:51.624 compress/octeontx: not in enabled drivers build config 00:33:51.624 compress/zlib: not in enabled drivers build config 00:33:51.624 regex/*: missing internal dependency, "regexdev" 00:33:51.624 ml/*: missing internal dependency, "mldev" 00:33:51.624 vdpa/ifc: not in enabled drivers build config 00:33:51.624 vdpa/mlx5: not in enabled drivers build config 00:33:51.624 vdpa/nfp: not in enabled drivers build config 00:33:51.624 vdpa/sfc: not in enabled drivers build config 00:33:51.624 event/*: missing internal dependency, "eventdev" 00:33:51.624 baseband/*: missing internal dependency, "bbdev" 00:33:51.624 gpu/*: missing internal dependency, "gpudev" 00:33:51.624 00:33:51.624 00:33:52.192 Build targets in project: 85 00:33:52.192 00:33:52.192 DPDK 23.11.0 00:33:52.192 00:33:52.192 User defined options 00:33:52.192 default_library : static 00:33:52.192 libdir : lib 00:33:52.192 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:33:52.192 b_lto : true 00:33:52.192 b_sanitize : address 00:33:52.192 c_args : -fPIC -Werror 00:33:52.192 c_link_args : 00:33:52.192 cpu_instruction_set: native 00:33:52.192 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:33:52.192 
disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:33:52.192 enable_docs : false 00:33:52.192 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:33:52.192 enable_kmods : false 00:33:52.192 tests : false 00:33:52.192 00:33:52.192 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:33:52.760 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:33:52.760 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:33:52.760 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:33:52.760 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:33:52.760 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:33:52.760 [5/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:33:53.018 [6/264] Linking static target lib/librte_kvargs.a 00:33:53.018 [7/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:33:53.018 [8/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:33:53.018 [9/264] Linking static target lib/librte_log.a 00:33:53.018 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:33:53.276 [11/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:33:53.276 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:33:53.276 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:33:53.276 [14/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:33:53.276 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:33:53.276 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:33:53.533 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:33:53.533 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:33:53.792 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:33:53.792 [20/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:33:53.792 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:33:53.792 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:33:53.792 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:33:53.792 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:33:54.056 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:33:54.056 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:33:54.056 [27/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:33:54.056 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:33:54.056 [29/264] Linking static target lib/librte_telemetry.a 00:33:54.056 [30/264] Linking target lib/librte_log.so.24.0 00:33:54.314 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:33:54.314 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:33:54.314 [33/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:33:54.314 [34/264] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:33:54.314 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:33:54.314 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:33:54.573 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:33:54.573 [38/264] Linking target lib/librte_kvargs.so.24.0 00:33:54.573 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:33:54.573 [40/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:33:54.573 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:33:54.573 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:33:54.573 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:33:54.833 [44/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:33:55.092 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:33:55.092 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:33:55.092 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:33:55.092 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:33:55.092 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:33:55.092 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:33:55.092 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:33:55.351 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:33:55.351 [53/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:33:55.351 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:33:55.351 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:33:55.351 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:33:55.351 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:33:55.351 [58/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:33:55.610 [59/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:33:55.610 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:33:55.610 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:33:55.610 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:33:55.610 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:33:55.610 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:33:55.610 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:33:55.610 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:33:55.868 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:33:56.126 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:33:56.126 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:33:56.126 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:33:56.126 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:33:56.126 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:33:56.126 [73/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:33:56.126 [74/264] Compiling C 
object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:33:56.126 [75/264] Linking target lib/librte_telemetry.so.24.0 00:33:56.126 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:33:56.385 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:33:56.385 [78/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:33:56.385 [79/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:33:56.643 [80/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:33:56.643 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:33:56.643 [82/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:33:56.643 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:33:56.643 [84/264] Linking static target lib/librte_ring.a 00:33:56.643 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:33:56.643 [86/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:33:56.902 [87/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:33:56.902 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:33:56.902 [89/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:33:57.161 [90/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:33:57.161 [91/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:33:57.161 [92/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:33:57.161 [93/264] Linking static target lib/librte_eal.a 00:33:57.418 [94/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:33:57.418 [95/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:33:57.418 [96/264] Linking static target lib/librte_mempool.a 00:33:57.418 [97/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:33:57.418 [98/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:33:57.418 [99/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:33:57.677 [100/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:33:57.677 [101/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:33:57.677 [102/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:33:57.677 [103/264] Linking static target lib/librte_rcu.a 00:33:57.677 [104/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:33:57.677 [105/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:33:57.677 [106/264] Linking static target lib/librte_net.a 00:33:57.677 [107/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:33:57.677 [108/264] Linking static target lib/librte_meter.a 00:33:57.935 [109/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:33:57.935 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:33:57.935 [111/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:33:57.935 [112/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:33:57.935 [113/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:33:58.193 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:33:58.193 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 
00:33:58.467 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:33:58.467 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:33:58.725 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:33:58.725 [119/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:33:58.725 [120/264] Linking static target lib/librte_mbuf.a 00:33:59.019 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:33:59.019 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:33:59.328 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:33:59.328 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:33:59.328 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:33:59.328 [126/264] Linking static target lib/librte_pci.a 00:33:59.328 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:33:59.328 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:33:59.328 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:33:59.586 [130/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:33:59.586 [131/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:33:59.586 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:33:59.586 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:33:59.586 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:33:59.845 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:33:59.845 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:33:59.845 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:33:59.845 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:33:59.845 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:33:59.845 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:33:59.845 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:33:59.845 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:33:59.845 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:34:00.104 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:34:00.104 [145/264] Linking static target lib/librte_cmdline.a 00:34:00.104 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:34:00.363 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:34:00.622 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:34:00.622 [149/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:34:00.622 [150/264] Linking static target lib/librte_timer.a 00:34:00.622 [151/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:34:00.880 [152/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:34:00.880 [153/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:34:00.880 [154/264] Linking static target lib/librte_compressdev.a 00:34:00.880 [155/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 
00:34:00.880 [156/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:34:01.139 [157/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:34:01.139 [158/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:34:01.139 [159/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:34:01.139 [160/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:34:01.398 [161/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:34:01.398 [162/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:01.398 [163/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:34:01.657 [164/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:34:01.657 [165/264] Linking static target lib/librte_dmadev.a 00:34:01.657 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:34:01.917 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:34:01.917 [168/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:34:01.917 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:34:02.176 [170/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:34:02.176 [171/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:02.176 [172/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:34:02.176 [173/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:34:02.435 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:34:02.695 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:34:02.695 [176/264] Linking static target lib/librte_power.a 00:34:02.695 [177/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:34:02.695 [178/264] Linking static target lib/librte_reorder.a 00:34:02.695 [179/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:34:02.954 [180/264] Linking static target lib/librte_security.a 00:34:02.954 [181/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:34:02.954 [182/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:34:02.954 [183/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:34:03.212 [184/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:34:03.212 [185/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:34:03.212 [186/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:34:03.778 [187/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:34:03.778 [188/264] Linking static target lib/librte_cryptodev.a 00:34:03.778 [189/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:34:03.778 [190/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:34:04.036 [191/264] Linking static target lib/librte_ethdev.a 00:34:04.036 [192/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:34:04.036 [193/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:34:04.603 [194/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:34:04.603 [195/264] Linking static target lib/librte_hash.a 
00:34:04.603 [196/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:34:04.603 [197/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:34:04.861 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:34:04.861 [199/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:34:05.119 [200/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:34:05.119 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:34:05.119 [202/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:34:05.119 [203/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:05.686 [204/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:34:05.686 [205/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:34:05.686 [206/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:34:05.686 [207/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:34:05.686 [208/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:34:05.686 [209/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:34:05.686 [210/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:34:05.686 [211/264] Linking static target drivers/librte_bus_vdev.a 00:34:05.686 [212/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:34:05.686 [213/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:34:05.944 [214/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:34:05.944 [215/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:34:05.944 [216/264] Linking static target drivers/librte_bus_pci.a 00:34:05.944 [217/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:05.944 [218/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:34:05.944 [219/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:34:06.203 [220/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:34:06.203 [221/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:34:06.203 [222/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:34:06.203 [223/264] Linking static target drivers/librte_mempool_ring.a 00:34:06.203 [224/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:34:10.393 [225/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:16.953 [226/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:34:16.953 [227/264] Linking target lib/librte_eal.so.24.0 00:34:16.953 [228/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:34:16.953 [229/264] Linking target lib/librte_meter.so.24.0 00:34:16.953 [230/264] Linking target lib/librte_pci.so.24.0 00:34:16.953 [231/264] Linking target lib/librte_ring.so.24.0 00:34:16.953 [232/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:34:16.953 [233/264] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:34:16.953 [234/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:34:16.953 [235/264] Linking target drivers/librte_bus_vdev.so.24.0 00:34:17.212 [236/264] Linking target lib/librte_timer.so.24.0 00:34:17.212 [237/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:34:17.471 [238/264] Linking target lib/librte_dmadev.so.24.0 00:34:17.471 [239/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:34:18.039 [240/264] Linking target lib/librte_mempool.so.24.0 00:34:18.039 [241/264] Linking target lib/librte_rcu.so.24.0 00:34:18.039 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:34:18.039 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:34:18.606 [244/264] Linking target drivers/librte_bus_pci.so.24.0 00:34:18.606 [245/264] Linking target drivers/librte_mempool_ring.so.24.0 00:34:19.982 [246/264] Linking target lib/librte_mbuf.so.24.0 00:34:20.240 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:34:20.807 [248/264] Linking target lib/librte_reorder.so.24.0 00:34:20.807 [249/264] Linking target lib/librte_compressdev.so.24.0 00:34:21.373 [250/264] Linking target lib/librte_net.so.24.0 00:34:21.374 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:34:22.748 [252/264] Linking target lib/librte_cmdline.so.24.0 00:34:22.749 [253/264] Linking target lib/librte_cryptodev.so.24.0 00:34:23.007 [254/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:34:23.572 [255/264] Linking target lib/librte_security.so.24.0 00:34:26.864 [256/264] Linking target lib/librte_hash.so.24.0 00:34:26.864 [257/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:34:34.976 [258/264] Linking target lib/librte_ethdev.so.24.0 00:34:34.976 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:34:37.509 [260/264] Linking target lib/librte_power.so.24.0 00:34:44.097 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:34:44.097 [262/264] Linking static target lib/librte_vhost.a 00:34:45.473 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:35:41.697 [264/264] Linking target lib/librte_vhost.so.24.0 00:35:41.697 INFO: autodetecting backend as ninja 00:35:41.697 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:35:41.697 CC lib/ut/ut.o 00:35:41.697 CC lib/log/log_flags.o 00:35:41.697 CC lib/log/log_deprecated.o 00:35:41.697 CC lib/log/log.o 00:35:41.697 CC lib/ut_mock/mock.o 00:35:41.697 LIB libspdk_ut_mock.a 00:35:41.697 LIB libspdk_ut.a 00:35:41.697 LIB libspdk_log.a 00:35:41.697 CC lib/ioat/ioat.o 00:35:41.697 CC lib/util/base64.o 00:35:41.697 CC lib/util/bit_array.o 00:35:41.697 CC lib/util/cpuset.o 00:35:41.697 CC lib/dma/dma.o 00:35:41.697 CC lib/util/crc16.o 00:35:41.697 CC lib/util/crc32.o 00:35:41.697 CC lib/util/crc32c.o 00:35:41.697 CXX lib/trace_parser/trace.o 00:35:41.697 CC lib/vfio_user/host/vfio_user_pci.o 00:35:41.697 CC lib/vfio_user/host/vfio_user.o 00:35:41.697 CC lib/util/crc32_ieee.o 00:35:41.697 CC lib/util/crc64.o 00:35:41.697 CC lib/util/dif.o 00:35:41.697 LIB libspdk_dma.a 00:35:41.697 CC lib/util/fd.o 00:35:41.697 CC lib/util/file.o 
00:35:41.697 CC lib/util/hexlify.o 00:35:41.697 LIB libspdk_ioat.a 00:35:41.697 CC lib/util/iov.o 00:35:41.697 CC lib/util/math.o 00:35:41.697 CC lib/util/pipe.o 00:35:41.697 CC lib/util/strerror_tls.o 00:35:41.697 LIB libspdk_vfio_user.a 00:35:41.697 CC lib/util/string.o 00:35:41.697 CC lib/util/uuid.o 00:35:41.697 CC lib/util/fd_group.o 00:35:41.697 CC lib/util/xor.o 00:35:41.697 CC lib/util/zipf.o 00:35:41.697 LIB libspdk_util.a 00:35:41.956 CC lib/json/json_parse.o 00:35:41.956 CC lib/json/json_util.o 00:35:41.956 CC lib/json/json_write.o 00:35:41.956 CC lib/rdma/common.o 00:35:41.956 CC lib/rdma/rdma_verbs.o 00:35:41.956 CC lib/env_dpdk/env.o 00:35:41.956 CC lib/conf/conf.o 00:35:41.956 CC lib/idxd/idxd.o 00:35:41.956 CC lib/vmd/vmd.o 00:35:41.956 LIB libspdk_trace_parser.a 00:35:41.956 CC lib/vmd/led.o 00:35:41.956 CC lib/env_dpdk/memory.o 00:35:41.956 LIB libspdk_conf.a 00:35:41.956 CC lib/idxd/idxd_user.o 00:35:41.956 CC lib/env_dpdk/pci.o 00:35:41.956 CC lib/env_dpdk/init.o 00:35:41.956 LIB libspdk_json.a 00:35:42.215 LIB libspdk_rdma.a 00:35:42.215 CC lib/env_dpdk/threads.o 00:35:42.216 CC lib/env_dpdk/pci_ioat.o 00:35:42.216 CC lib/env_dpdk/pci_virtio.o 00:35:42.216 CC lib/env_dpdk/pci_vmd.o 00:35:42.216 LIB libspdk_vmd.a 00:35:42.216 CC lib/env_dpdk/pci_idxd.o 00:35:42.216 CC lib/env_dpdk/pci_event.o 00:35:42.216 LIB libspdk_idxd.a 00:35:42.216 CC lib/env_dpdk/sigbus_handler.o 00:35:42.216 CC lib/env_dpdk/pci_dpdk.o 00:35:42.216 CC lib/env_dpdk/pci_dpdk_2207.o 00:35:42.216 CC lib/env_dpdk/pci_dpdk_2211.o 00:35:42.474 CC lib/jsonrpc/jsonrpc_server.o 00:35:42.474 CC lib/jsonrpc/jsonrpc_client.o 00:35:42.474 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:35:42.474 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:35:42.733 LIB libspdk_jsonrpc.a 00:35:42.733 CC lib/rpc/rpc.o 00:35:42.733 LIB libspdk_env_dpdk.a 00:35:42.992 LIB libspdk_rpc.a 00:35:42.992 CC lib/notify/notify.o 00:35:42.992 CC lib/notify/notify_rpc.o 00:35:42.993 CC lib/trace/trace.o 00:35:42.993 CC lib/trace/trace_rpc.o 00:35:42.993 CC lib/trace/trace_flags.o 00:35:42.993 CC lib/sock/sock.o 00:35:42.993 CC lib/sock/sock_rpc.o 00:35:43.252 LIB libspdk_notify.a 00:35:43.252 LIB libspdk_trace.a 00:35:43.252 LIB libspdk_sock.a 00:35:43.252 CC lib/thread/thread.o 00:35:43.252 CC lib/thread/iobuf.o 00:35:43.510 CC lib/nvme/nvme_ctrlr_cmd.o 00:35:43.510 CC lib/nvme/nvme_fabric.o 00:35:43.510 CC lib/nvme/nvme_ctrlr.o 00:35:43.510 CC lib/nvme/nvme_ns_cmd.o 00:35:43.510 CC lib/nvme/nvme_ns.o 00:35:43.510 CC lib/nvme/nvme_pcie.o 00:35:43.510 CC lib/nvme/nvme_pcie_common.o 00:35:43.510 CC lib/nvme/nvme_qpair.o 00:35:43.510 CC lib/nvme/nvme.o 00:35:43.768 CC lib/nvme/nvme_quirks.o 00:35:44.028 CC lib/nvme/nvme_transport.o 00:35:44.028 CC lib/nvme/nvme_discovery.o 00:35:44.028 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:35:44.028 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:35:44.028 LIB libspdk_thread.a 00:35:44.028 CC lib/nvme/nvme_tcp.o 00:35:44.028 CC lib/nvme/nvme_opal.o 00:35:44.028 CC lib/nvme/nvme_io_msg.o 00:35:44.287 CC lib/nvme/nvme_poll_group.o 00:35:44.287 CC lib/nvme/nvme_zns.o 00:35:44.545 CC lib/nvme/nvme_cuse.o 00:35:44.545 CC lib/accel/accel.o 00:35:44.545 CC lib/blob/blobstore.o 00:35:44.545 CC lib/init/json_config.o 00:35:44.545 CC lib/init/subsystem.o 00:35:44.545 CC lib/init/rpc.o 00:35:44.545 CC lib/init/subsystem_rpc.o 00:35:44.545 CC lib/virtio/virtio.o 00:35:44.804 CC lib/virtio/virtio_vhost_user.o 00:35:44.804 CC lib/virtio/virtio_vfio_user.o 00:35:44.804 CC lib/virtio/virtio_pci.o 00:35:44.804 CC lib/nvme/nvme_vfio_user.o 00:35:44.804 
LIB libspdk_init.a 00:35:44.804 CC lib/accel/accel_rpc.o 00:35:44.804 CC lib/accel/accel_sw.o 00:35:44.804 CC lib/blob/request.o 00:35:45.062 CC lib/blob/zeroes.o 00:35:45.062 LIB libspdk_virtio.a 00:35:45.062 CC lib/nvme/nvme_rdma.o 00:35:45.062 CC lib/blob/blob_bs_dev.o 00:35:45.062 CC lib/event/reactor.o 00:35:45.062 CC lib/event/app.o 00:35:45.062 CC lib/event/app_rpc.o 00:35:45.062 CC lib/event/scheduler_static.o 00:35:45.062 CC lib/event/log_rpc.o 00:35:45.062 LIB libspdk_accel.a 00:35:45.321 CC lib/bdev/bdev.o 00:35:45.321 CC lib/bdev/bdev_zone.o 00:35:45.321 CC lib/bdev/bdev_rpc.o 00:35:45.321 CC lib/bdev/part.o 00:35:45.321 CC lib/bdev/scsi_nvme.o 00:35:45.321 LIB libspdk_event.a 00:35:45.887 LIB libspdk_nvme.a 00:35:46.148 LIB libspdk_blob.a 00:35:46.407 CC lib/lvol/lvol.o 00:35:46.407 CC lib/blobfs/tree.o 00:35:46.407 CC lib/blobfs/blobfs.o 00:35:46.666 LIB libspdk_bdev.a 00:35:46.666 LIB libspdk_blobfs.a 00:35:46.924 LIB libspdk_lvol.a 00:35:46.924 CC lib/nbd/nbd_rpc.o 00:35:46.924 CC lib/nbd/nbd.o 00:35:46.924 CC lib/nvmf/ctrlr.o 00:35:46.924 CC lib/nvmf/ctrlr_discovery.o 00:35:46.924 CC lib/nvmf/ctrlr_bdev.o 00:35:46.924 CC lib/scsi/lun.o 00:35:46.924 CC lib/scsi/dev.o 00:35:46.924 CC lib/nvmf/subsystem.o 00:35:46.924 CC lib/scsi/port.o 00:35:46.924 CC lib/ftl/ftl_core.o 00:35:46.924 CC lib/ftl/ftl_init.o 00:35:46.924 CC lib/ftl/ftl_layout.o 00:35:47.184 CC lib/ftl/ftl_debug.o 00:35:47.184 CC lib/ftl/ftl_io.o 00:35:47.184 CC lib/ftl/ftl_sb.o 00:35:47.184 CC lib/scsi/scsi.o 00:35:47.184 CC lib/scsi/scsi_bdev.o 00:35:47.184 CC lib/scsi/scsi_pr.o 00:35:47.184 LIB libspdk_nbd.a 00:35:47.184 CC lib/ftl/ftl_l2p.o 00:35:47.184 CC lib/nvmf/nvmf.o 00:35:47.184 CC lib/nvmf/nvmf_rpc.o 00:35:47.442 CC lib/nvmf/transport.o 00:35:47.442 CC lib/nvmf/tcp.o 00:35:47.442 CC lib/nvmf/rdma.o 00:35:47.442 CC lib/scsi/scsi_rpc.o 00:35:47.442 CC lib/ftl/ftl_l2p_flat.o 00:35:47.442 CC lib/ftl/ftl_nv_cache.o 00:35:47.442 CC lib/ftl/ftl_band.o 00:35:47.701 CC lib/scsi/task.o 00:35:47.701 CC lib/ftl/ftl_band_ops.o 00:35:47.701 CC lib/ftl/ftl_rq.o 00:35:47.701 CC lib/ftl/ftl_writer.o 00:35:47.701 CC lib/ftl/ftl_reloc.o 00:35:47.701 CC lib/ftl/ftl_l2p_cache.o 00:35:47.701 LIB libspdk_scsi.a 00:35:47.701 CC lib/ftl/ftl_p2l.o 00:35:47.701 CC lib/ftl/mngt/ftl_mngt.o 00:35:47.959 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:35:47.959 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:35:47.959 CC lib/ftl/mngt/ftl_mngt_startup.o 00:35:47.959 CC lib/iscsi/conn.o 00:35:47.959 CC lib/ftl/mngt/ftl_mngt_md.o 00:35:47.959 CC lib/iscsi/init_grp.o 00:35:47.959 CC lib/iscsi/iscsi.o 00:35:47.959 CC lib/iscsi/md5.o 00:35:47.959 CC lib/iscsi/param.o 00:35:48.218 CC lib/ftl/mngt/ftl_mngt_misc.o 00:35:48.218 CC lib/iscsi/portal_grp.o 00:35:48.218 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:35:48.218 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:35:48.218 CC lib/iscsi/tgt_node.o 00:35:48.218 CC lib/iscsi/iscsi_subsystem.o 00:35:48.218 CC lib/iscsi/iscsi_rpc.o 00:35:48.218 CC lib/iscsi/task.o 00:35:48.477 CC lib/ftl/mngt/ftl_mngt_band.o 00:35:48.477 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:35:48.477 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:35:48.477 LIB libspdk_nvmf.a 00:35:48.477 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:35:48.477 CC lib/vhost/vhost.o 00:35:48.477 CC lib/vhost/vhost_rpc.o 00:35:48.477 CC lib/vhost/vhost_scsi.o 00:35:48.477 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:35:48.477 CC lib/vhost/vhost_blk.o 00:35:48.477 CC lib/vhost/rte_vhost_user.o 00:35:48.477 CC lib/ftl/utils/ftl_conf.o 00:35:48.736 CC lib/ftl/utils/ftl_md.o 00:35:48.736 CC lib/ftl/utils/ftl_mempool.o 
00:35:48.736 CC lib/ftl/utils/ftl_bitmap.o 00:35:48.736 CC lib/ftl/utils/ftl_property.o 00:35:48.736 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:35:48.994 LIB libspdk_iscsi.a 00:35:48.994 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:35:48.994 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:35:48.994 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:35:48.994 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:35:48.995 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:35:48.995 CC lib/ftl/upgrade/ftl_sb_v3.o 00:35:48.995 CC lib/ftl/upgrade/ftl_sb_v5.o 00:35:48.995 CC lib/ftl/nvc/ftl_nvc_dev.o 00:35:48.995 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:35:48.995 CC lib/ftl/base/ftl_base_dev.o 00:35:49.253 CC lib/ftl/base/ftl_base_bdev.o 00:35:49.254 LIB libspdk_ftl.a 00:35:49.254 LIB libspdk_vhost.a 00:35:49.513 CC module/env_dpdk/env_dpdk_rpc.o 00:35:49.513 CC module/blob/bdev/blob_bdev.o 00:35:49.513 CC module/accel/error/accel_error.o 00:35:49.513 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:35:49.513 CC module/scheduler/gscheduler/gscheduler.o 00:35:49.513 CC module/accel/iaa/accel_iaa.o 00:35:49.513 CC module/scheduler/dynamic/scheduler_dynamic.o 00:35:49.513 CC module/accel/dsa/accel_dsa.o 00:35:49.513 CC module/sock/posix/posix.o 00:35:49.513 CC module/accel/ioat/accel_ioat.o 00:35:49.513 LIB libspdk_env_dpdk_rpc.a 00:35:49.513 CC module/accel/dsa/accel_dsa_rpc.o 00:35:49.773 LIB libspdk_scheduler_gscheduler.a 00:35:49.773 LIB libspdk_scheduler_dpdk_governor.a 00:35:49.773 CC module/accel/error/accel_error_rpc.o 00:35:49.773 CC module/accel/ioat/accel_ioat_rpc.o 00:35:49.773 LIB libspdk_scheduler_dynamic.a 00:35:49.773 CC module/accel/iaa/accel_iaa_rpc.o 00:35:49.773 LIB libspdk_blob_bdev.a 00:35:49.773 LIB libspdk_accel_dsa.a 00:35:49.773 LIB libspdk_accel_ioat.a 00:35:49.773 LIB libspdk_accel_error.a 00:35:49.773 LIB libspdk_accel_iaa.a 00:35:49.773 CC module/bdev/delay/vbdev_delay.o 00:35:49.773 CC module/bdev/lvol/vbdev_lvol.o 00:35:49.773 CC module/bdev/malloc/bdev_malloc.o 00:35:49.773 CC module/bdev/error/vbdev_error.o 00:35:49.773 CC module/blobfs/bdev/blobfs_bdev.o 00:35:49.773 CC module/bdev/gpt/gpt.o 00:35:50.032 CC module/bdev/null/bdev_null.o 00:35:50.032 CC module/bdev/nvme/bdev_nvme.o 00:35:50.032 CC module/bdev/passthru/vbdev_passthru.o 00:35:50.032 LIB libspdk_sock_posix.a 00:35:50.032 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:35:50.032 CC module/bdev/null/bdev_null_rpc.o 00:35:50.032 CC module/bdev/gpt/vbdev_gpt.o 00:35:50.032 CC module/bdev/error/vbdev_error_rpc.o 00:35:50.032 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:35:50.032 CC module/bdev/delay/vbdev_delay_rpc.o 00:35:50.032 CC module/bdev/malloc/bdev_malloc_rpc.o 00:35:50.290 LIB libspdk_blobfs_bdev.a 00:35:50.290 LIB libspdk_bdev_null.a 00:35:50.290 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:35:50.290 LIB libspdk_bdev_error.a 00:35:50.290 LIB libspdk_bdev_passthru.a 00:35:50.291 LIB libspdk_bdev_delay.a 00:35:50.291 LIB libspdk_bdev_malloc.a 00:35:50.291 CC module/bdev/split/vbdev_split.o 00:35:50.291 CC module/bdev/nvme/bdev_nvme_rpc.o 00:35:50.291 CC module/bdev/raid/bdev_raid.o 00:35:50.291 CC module/bdev/zone_block/vbdev_zone_block.o 00:35:50.291 LIB libspdk_bdev_gpt.a 00:35:50.291 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:35:50.291 CC module/bdev/aio/bdev_aio.o 00:35:50.291 CC module/bdev/ftl/bdev_ftl.o 00:35:50.549 CC module/bdev/iscsi/bdev_iscsi.o 00:35:50.549 LIB libspdk_bdev_lvol.a 00:35:50.549 CC module/bdev/aio/bdev_aio_rpc.o 00:35:50.549 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:35:50.549 CC 
module/bdev/split/vbdev_split_rpc.o 00:35:50.549 LIB libspdk_bdev_zone_block.a 00:35:50.549 CC module/bdev/nvme/nvme_rpc.o 00:35:50.549 CC module/bdev/nvme/bdev_mdns_client.o 00:35:50.549 CC module/bdev/ftl/bdev_ftl_rpc.o 00:35:50.549 LIB libspdk_bdev_aio.a 00:35:50.549 CC module/bdev/nvme/vbdev_opal.o 00:35:50.549 LIB libspdk_bdev_split.a 00:35:50.549 CC module/bdev/nvme/vbdev_opal_rpc.o 00:35:50.549 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:35:50.808 LIB libspdk_bdev_iscsi.a 00:35:50.808 CC module/bdev/raid/bdev_raid_rpc.o 00:35:50.808 CC module/bdev/raid/bdev_raid_sb.o 00:35:50.808 CC module/bdev/raid/raid0.o 00:35:50.808 LIB libspdk_bdev_ftl.a 00:35:50.808 CC module/bdev/raid/raid1.o 00:35:50.808 CC module/bdev/raid/concat.o 00:35:50.808 CC module/bdev/virtio/bdev_virtio_scsi.o 00:35:50.808 CC module/bdev/virtio/bdev_virtio_blk.o 00:35:50.808 CC module/bdev/raid/raid5f.o 00:35:50.808 CC module/bdev/virtio/bdev_virtio_rpc.o 00:35:51.066 LIB libspdk_bdev_raid.a 00:35:51.066 LIB libspdk_bdev_virtio.a 00:35:51.334 LIB libspdk_bdev_nvme.a 00:35:51.334 CC module/event/subsystems/vmd/vmd.o 00:35:51.334 CC module/event/subsystems/vmd/vmd_rpc.o 00:35:51.334 CC module/event/subsystems/sock/sock.o 00:35:51.334 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:35:51.334 CC module/event/subsystems/iobuf/iobuf.o 00:35:51.334 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:35:51.334 CC module/event/subsystems/scheduler/scheduler.o 00:35:51.614 LIB libspdk_event_vhost_blk.a 00:35:51.614 LIB libspdk_event_vmd.a 00:35:51.614 LIB libspdk_event_sock.a 00:35:51.614 LIB libspdk_event_scheduler.a 00:35:51.614 LIB libspdk_event_iobuf.a 00:35:51.614 CC module/event/subsystems/accel/accel.o 00:35:51.873 LIB libspdk_event_accel.a 00:35:51.873 CC module/event/subsystems/bdev/bdev.o 00:35:52.132 LIB libspdk_event_bdev.a 00:35:52.390 CC module/event/subsystems/nbd/nbd.o 00:35:52.390 CC module/event/subsystems/scsi/scsi.o 00:35:52.390 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:35:52.390 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:35:52.390 LIB libspdk_event_nbd.a 00:35:52.390 LIB libspdk_event_scsi.a 00:35:52.390 LIB libspdk_event_nvmf.a 00:35:52.649 CC module/event/subsystems/iscsi/iscsi.o 00:35:52.649 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:35:52.649 LIB libspdk_event_vhost_scsi.a 00:35:52.649 LIB libspdk_event_iscsi.a 00:35:52.907 CXX app/trace/trace.o 00:35:52.907 CC examples/sock/hello_world/hello_sock.o 00:35:52.907 CC examples/vmd/lsvmd/lsvmd.o 00:35:52.907 CC examples/accel/perf/accel_perf.o 00:35:52.907 CC examples/nvme/hello_world/hello_world.o 00:35:52.907 CC examples/ioat/perf/perf.o 00:35:52.907 CC test/accel/dif/dif.o 00:35:52.907 CC examples/bdev/hello_world/hello_bdev.o 00:35:52.907 CC examples/nvmf/nvmf/nvmf.o 00:35:52.907 CC examples/blob/hello_world/hello_blob.o 00:35:53.165 LINK lsvmd 00:35:53.166 LINK ioat_perf 00:35:53.166 LINK hello_world 00:35:53.166 LINK hello_sock 00:35:53.166 LINK hello_bdev 00:35:53.166 LINK hello_blob 00:35:53.166 LINK dif 00:35:53.166 LINK nvmf 00:35:53.424 LINK accel_perf 00:35:53.424 LINK spdk_trace 00:36:05.625 CC app/trace_record/trace_record.o 00:36:05.625 LINK spdk_trace_record 00:36:23.697 CC examples/ioat/verify/verify.o 00:36:24.264 LINK verify 00:36:42.349 CC app/nvmf_tgt/nvmf_main.o 00:36:42.349 CC examples/nvme/reconnect/reconnect.o 00:36:42.349 LINK nvmf_tgt 00:36:43.283 LINK reconnect 00:36:44.219 CC examples/vmd/led/led.o 00:36:45.153 LINK led 00:36:46.086 CC app/iscsi_tgt/iscsi_tgt.o 00:36:46.342 CC app/spdk_tgt/spdk_tgt.o 
00:36:47.275 LINK iscsi_tgt 00:36:47.275 LINK spdk_tgt 00:36:57.288 CC app/spdk_lspci/spdk_lspci.o 00:36:58.223 LINK spdk_lspci 00:37:44.930 CC app/spdk_nvme_perf/perf.o 00:37:46.832 CC examples/nvme/nvme_manage/nvme_manage.o 00:37:50.117 LINK spdk_nvme_perf 00:37:50.117 LINK nvme_manage 00:38:08.198 CC examples/nvme/arbitration/arbitration.o 00:38:09.130 LINK arbitration 00:38:47.834 CC examples/blob/cli/blobcli.o 00:38:47.834 CC examples/bdev/bdevperf/bdevperf.o 00:38:48.093 LINK blobcli 00:38:50.626 CC test/app/bdev_svc/bdev_svc.o 00:38:51.562 LINK bdevperf 00:38:51.820 LINK bdev_svc 00:39:04.025 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:39:04.025 LINK nvme_fuzz 00:39:04.592 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:39:04.850 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:39:05.417 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:39:07.322 LINK vhost_fuzz 00:39:09.854 LINK iscsi_fuzz 00:39:17.979 CC examples/nvme/hotplug/hotplug.o 00:39:19.879 LINK hotplug 00:40:16.093 CC examples/nvme/cmb_copy/cmb_copy.o 00:40:16.093 LINK cmb_copy 00:40:16.661 CC test/bdev/bdevio/bdevio.o 00:40:17.597 CC test/blobfs/mkfs/mkfs.o 00:40:17.597 LINK bdevio 00:40:18.535 LINK mkfs 00:40:19.911 TEST_HEADER include/spdk/config.h 00:40:19.911 CXX test/cpp_headers/accel_module.o 00:40:20.852 CXX test/cpp_headers/bit_pool.o 00:40:21.787 CXX test/cpp_headers/ioat.o 00:40:21.787 CXX test/cpp_headers/blobfs.o 00:40:23.162 CXX test/cpp_headers/notify.o 00:40:23.162 CC examples/nvme/abort/abort.o 00:40:24.099 CXX test/cpp_headers/pipe.o 00:40:25.034 CXX test/cpp_headers/accel.o 00:40:25.034 LINK abort 00:40:25.971 CXX test/cpp_headers/file.o 00:40:26.906 CXX test/cpp_headers/version.o 00:40:27.164 CXX test/cpp_headers/trace_parser.o 00:40:27.730 CXX test/cpp_headers/opal_spec.o 00:40:28.663 CXX test/cpp_headers/uuid.o 00:40:29.229 CC examples/util/zipf/zipf.o 00:40:29.487 CXX test/cpp_headers/likely.o 00:40:30.070 LINK zipf 00:40:30.636 CXX test/cpp_headers/dif.o 00:40:31.572 CXX test/cpp_headers/memory.o 00:40:32.506 CXX test/cpp_headers/vfio_user_pci.o 00:40:33.879 CXX test/cpp_headers/dma.o 00:40:35.250 CXX test/cpp_headers/nbd.o 00:40:35.508 CXX test/cpp_headers/conf.o 00:40:36.442 CXX test/cpp_headers/env_dpdk.o 00:40:38.344 CXX test/cpp_headers/nvmf_spec.o 00:40:39.729 CXX test/cpp_headers/iscsi_spec.o 00:40:41.105 CXX test/cpp_headers/mmio.o 00:40:42.481 CXX test/cpp_headers/json.o 00:40:43.856 CXX test/cpp_headers/opal.o 00:40:45.757 CXX test/cpp_headers/bdev.o 00:40:47.136 CXX test/cpp_headers/base64.o 00:40:48.512 CXX test/cpp_headers/blobfs_bdev.o 00:40:50.414 CXX test/cpp_headers/nvme_ocssd.o 00:40:52.315 CXX test/cpp_headers/fd.o 00:40:53.703 CXX test/cpp_headers/barrier.o 00:40:55.102 CXX test/cpp_headers/scsi_spec.o 00:40:55.102 CC app/spdk_nvme_identify/identify.o 00:40:57.003 CXX test/cpp_headers/zipf.o 00:40:57.938 CXX test/cpp_headers/nvmf.o 00:40:59.839 CXX test/cpp_headers/queue.o 00:40:59.839 LINK spdk_nvme_identify 00:40:59.839 CXX test/cpp_headers/xor.o 00:41:01.214 CXX test/cpp_headers/cpuset.o 00:41:02.591 CXX test/cpp_headers/thread.o 00:41:02.849 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:41:03.786 CXX test/cpp_headers/bdev_zone.o 00:41:04.044 LINK pmr_persistence 00:41:05.419 CXX test/cpp_headers/fd_group.o 00:41:06.353 CXX test/cpp_headers/tree.o 00:41:06.611 CXX test/cpp_headers/blob_bdev.o 00:41:07.984 CXX test/cpp_headers/crc64.o 00:41:08.918 CXX test/cpp_headers/assert.o 00:41:09.516 CC test/app/histogram_perf/histogram_perf.o 00:41:10.082 CXX test/cpp_headers/nvme_spec.o 
00:41:10.339 LINK histogram_perf 00:41:11.714 CXX test/cpp_headers/endian.o 00:41:13.088 CXX test/cpp_headers/pci_ids.o 00:41:14.462 CXX test/cpp_headers/log.o 00:41:15.836 CXX test/cpp_headers/nvme_ocssd_spec.o 00:41:17.737 CXX test/cpp_headers/ftl.o 00:41:19.115 CXX test/cpp_headers/config.o 00:41:19.373 CXX test/cpp_headers/vhost.o 00:41:21.274 CXX test/cpp_headers/bdev_module.o 00:41:22.650 CXX test/cpp_headers/nvme_intel.o 00:41:24.024 CXX test/cpp_headers/idxd_spec.o 00:41:24.958 CXX test/cpp_headers/crc16.o 00:41:26.335 CXX test/cpp_headers/nvme.o 00:41:28.239 CXX test/cpp_headers/stdinc.o 00:41:29.618 CXX test/cpp_headers/scsi.o 00:41:30.995 CXX test/cpp_headers/nvmf_fc_spec.o 00:41:30.995 CXX test/cpp_headers/idxd.o 00:41:32.370 CXX test/cpp_headers/hexlify.o 00:41:33.763 CC examples/thread/thread/thread_ex.o 00:41:33.763 CXX test/cpp_headers/reduce.o 00:41:35.146 LINK thread 00:41:35.146 CXX test/cpp_headers/crc32.o 00:41:36.081 CXX test/cpp_headers/init.o 00:41:37.014 CXX test/cpp_headers/nvmf_transport.o 00:41:38.387 CXX test/cpp_headers/nvme_zns.o 00:41:38.645 CXX test/cpp_headers/vfio_user_spec.o 00:41:39.579 CXX test/cpp_headers/util.o 00:41:39.838 CC test/app/jsoncat/jsoncat.o 00:41:40.405 CXX test/cpp_headers/jsonrpc.o 00:41:40.663 LINK jsoncat 00:41:41.229 CXX test/cpp_headers/env.o 00:41:41.848 CC examples/idxd/perf/perf.o 00:41:42.106 CXX test/cpp_headers/nvmf_cmd.o 00:41:43.040 CC examples/interrupt_tgt/interrupt_tgt.o 00:41:43.040 LINK idxd_perf 00:41:43.040 CXX test/cpp_headers/lvol.o 00:41:43.606 LINK interrupt_tgt 00:41:43.863 CXX test/cpp_headers/histogram_data.o 00:41:44.122 CC test/app/stub/stub.o 00:41:45.060 LINK stub 00:41:45.060 CXX test/cpp_headers/event.o 00:41:46.437 CXX test/cpp_headers/trace.o 00:41:47.378 CXX test/cpp_headers/ioat_spec.o 00:41:48.755 CXX test/cpp_headers/string.o 00:41:49.322 CXX test/cpp_headers/ublk.o 00:41:50.698 CXX test/cpp_headers/bit_array.o 00:41:51.634 CXX test/cpp_headers/scheduler.o 00:41:52.569 CXX test/cpp_headers/blob.o 00:41:53.504 CXX test/cpp_headers/gpt_spec.o 00:41:54.438 CXX test/cpp_headers/sock.o 00:41:55.813 CXX test/cpp_headers/vmd.o 00:41:56.748 CXX test/cpp_headers/rpc.o 00:41:57.319 CC app/spdk_nvme_discover/discovery_aer.o 00:41:58.704 LINK spdk_nvme_discover 00:41:58.704 CC test/dma/test_dma/test_dma.o 00:41:59.271 CC app/spdk_top/spdk_top.o 00:42:01.175 LINK test_dma 00:42:03.079 LINK spdk_top 00:42:07.264 CC test/env/mem_callbacks/mem_callbacks.o 00:42:10.547 LINK mem_callbacks 00:42:22.757 CC test/env/vtophys/vtophys.o 00:42:24.132 LINK vtophys 00:42:39.039 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:42:39.039 LINK env_dpdk_post_init 00:42:57.126 CC app/vhost/vhost.o 00:42:57.126 CC app/spdk_dd/spdk_dd.o 00:42:57.126 LINK vhost 00:42:57.385 LINK spdk_dd 00:42:58.320 CC app/fio/nvme/fio_plugin.o 00:42:58.320 CC test/event/event_perf/event_perf.o 00:42:59.256 LINK event_perf 00:43:00.632 LINK spdk_nvme 00:43:00.891 CC test/event/reactor/reactor.o 00:43:01.458 CC test/env/memory/memory_ut.o 00:43:01.716 LINK reactor 00:43:05.907 LINK memory_ut 00:43:20.803 CC app/fio/bdev/fio_plugin.o 00:43:22.722 LINK spdk_bdev 00:43:30.842 CC test/env/pci/pci_ut.o 00:43:33.374 LINK pci_ut 00:43:36.666 CC test/event/reactor_perf/reactor_perf.o 00:43:37.233 LINK reactor_perf 00:43:45.348 CC test/event/app_repeat/app_repeat.o 00:43:45.915 LINK app_repeat 00:43:49.230 CC test/event/scheduler/scheduler.o 00:43:51.133 LINK scheduler 00:44:09.217 CC test/lvol/esnap/esnap.o 00:44:19.187 CC test/nvme/aer/aer.o 
00:44:19.753 LINK aer 00:44:29.750 LINK esnap 00:44:32.286 CC test/nvme/reset/reset.o 00:44:34.187 LINK reset 00:45:06.317 CC test/rpc_client/rpc_client_test.o 00:45:06.576 LINK rpc_client_test 00:45:14.686 CC test/nvme/sgl/sgl.o 00:45:15.619 LINK sgl 00:45:20.909 CC test/nvme/e2edp/nvme_dp.o 00:45:22.286 LINK nvme_dp 00:45:37.199 CC test/thread/poller_perf/poller_perf.o 00:45:37.199 LINK poller_perf 00:45:43.802 CC test/thread/lock/spdk_lock.o 00:45:47.994 CC test/nvme/overhead/overhead.o 00:45:49.369 LINK overhead 00:45:49.937 LINK spdk_lock 00:46:08.045 CC test/nvme/err_injection/err_injection.o 00:46:08.045 LINK err_injection 00:46:20.251 CC test/nvme/startup/startup.o 00:46:20.251 LINK startup 00:46:20.251 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:46:20.822 LINK histogram_ut 00:46:27.400 CC test/nvme/reserve/reserve.o 00:46:27.400 CC test/unit/lib/accel/accel.c/accel_ut.o 00:46:27.659 LINK reserve 00:46:32.926 CC test/nvme/simple_copy/simple_copy.o 00:46:33.185 LINK simple_copy 00:46:35.102 LINK accel_ut 00:46:35.669 CC test/nvme/connect_stress/connect_stress.o 00:46:36.604 LINK connect_stress 00:46:51.477 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:46:51.477 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:46:52.044 LINK blob_bdev_ut 00:46:53.419 CC test/unit/lib/blob/blob.c/blob_ut.o 00:46:54.794 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:46:55.729 LINK tree_ut 00:46:59.919 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:47:01.823 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:47:02.758 LINK blobfs_async_ut 00:47:03.695 LINK bdev_ut 00:47:04.262 LINK blobfs_sync_ut 00:47:07.550 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:47:07.550 CC test/nvme/boot_partition/boot_partition.o 00:47:08.118 LINK blobfs_bdev_ut 00:47:08.376 LINK boot_partition 00:47:10.909 CC test/nvme/compliance/nvme_compliance.o 00:47:11.477 LINK blob_ut 00:47:12.043 LINK nvme_compliance 00:47:12.302 CC test/unit/lib/dma/dma.c/dma_ut.o 00:47:13.679 LINK dma_ut 00:47:13.679 CC test/nvme/fused_ordering/fused_ordering.o 00:47:14.613 LINK fused_ordering 00:47:15.992 CC test/unit/lib/event/app.c/app_ut.o 00:47:16.262 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:47:17.692 LINK app_ut 00:47:17.950 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:47:17.950 LINK reactor_ut 00:47:19.325 LINK ioat_ut 00:47:20.700 CC test/nvme/doorbell_aers/doorbell_aers.o 00:47:21.636 LINK doorbell_aers 00:47:23.013 CC test/nvme/fdp/fdp.o 00:47:24.388 LINK fdp 00:47:25.324 CC test/nvme/cuse/cuse.o 00:47:28.644 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:47:28.903 LINK cuse 00:47:29.471 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:47:32.005 LINK conn_ut 00:47:35.293 LINK json_parse_ut 00:47:39.484 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:47:40.863 LINK jsonrpc_server_ut 00:47:45.052 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:47:46.430 LINK init_grp_ut 00:47:49.814 CC test/unit/lib/bdev/part.c/part_ut.o 00:47:50.073 CC test/unit/lib/log/log.c/log_ut.o 00:47:50.639 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:47:50.639 LINK log_ut 00:47:50.897 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:47:53.431 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:47:53.431 CC test/unit/lib/iscsi/param.c/param_ut.o 00:47:54.002 LINK lvol_ut 00:47:54.002 LINK iscsi_ut 00:47:54.002 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:47:54.002 LINK param_ut 00:47:54.002 LINK json_util_ut 00:47:54.261 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:47:54.830 CC 
test/unit/lib/json/json_write.c/json_write_ut.o 00:47:54.830 LINK portal_grp_ut 00:47:55.089 LINK tgt_node_ut 00:47:55.089 LINK part_ut 00:47:56.468 LINK json_write_ut 00:47:57.037 CC test/unit/lib/notify/notify.c/notify_ut.o 00:47:57.605 LINK notify_ut 00:47:58.541 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:47:58.800 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:47:59.378 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:47:59.636 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:47:59.636 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:47:59.636 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:47:59.894 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:47:59.894 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:47:59.894 LINK nvme_ut 00:48:00.153 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:48:00.153 LINK scsi_nvme_ut 00:48:00.411 LINK ctrlr_bdev_ut 00:48:00.411 LINK gpt_ut 00:48:00.411 LINK ctrlr_ut 00:48:00.670 LINK ctrlr_discovery_ut 00:48:00.670 LINK tcp_ut 00:48:00.929 LINK subsystem_ut 00:48:01.496 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:48:01.754 LINK nvme_ctrlr_ut 00:48:03.658 LINK nvmf_ut 00:48:04.223 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:48:05.160 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:48:05.728 LINK dev_ut 00:48:06.296 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:48:06.296 LINK vbdev_lvol_ut 00:48:06.554 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:48:07.930 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:48:07.930 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:48:08.189 LINK bdev_raid_sb_ut 00:48:08.449 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:48:08.449 LINK concat_ut 00:48:08.449 LINK bdev_raid_ut 00:48:08.449 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:48:09.016 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:48:09.016 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:48:09.016 LINK lun_ut 00:48:09.275 LINK nvme_ctrlr_cmd_ut 00:48:09.275 LINK bdev_ut 00:48:09.275 LINK scsi_ut 00:48:09.844 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:48:09.844 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:48:11.221 LINK scsi_bdev_ut 00:48:11.221 CC test/unit/lib/sock/sock.c/sock_ut.o 00:48:11.788 CC test/unit/lib/sock/posix.c/posix_ut.o 00:48:12.753 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:48:12.753 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:48:13.037 LINK rdma_ut 00:48:13.037 LINK sock_ut 00:48:13.037 LINK posix_ut 00:48:13.037 LINK transport_ut 00:48:13.297 LINK raid1_ut 00:48:13.865 LINK nvme_ctrlr_ocssd_cmd_ut 00:48:14.124 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:48:16.657 LINK raid5f_ut 00:48:16.657 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:48:16.657 LINK bdev_zone_ut 00:48:16.916 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:48:18.292 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:48:18.551 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:48:18.810 LINK nvme_ns_ut 00:48:18.810 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:48:18.810 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:48:18.810 LINK vbdev_zone_block_ut 00:48:19.068 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:48:19.326 LINK scsi_pr_ut 00:48:19.893 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:48:20.151 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:48:20.411 LINK nvme_ns_cmd_ut 00:48:20.411 LINK nvme_ns_ocssd_cmd_ut 00:48:20.977 CC test/unit/lib/thread/thread.c/thread_ut.o 00:48:20.977 LINK 
nvme_poll_group_ut 00:48:21.544 LINK nvme_pcie_ut 00:48:21.544 LINK bdev_nvme_ut 00:48:21.544 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:48:21.544 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:48:21.544 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:48:22.111 LINK thread_ut 00:48:22.369 LINK iobuf_ut 00:48:22.369 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:48:22.369 LINK nvme_quirks_ut 00:48:22.369 LINK nvme_qpair_ut 00:48:23.325 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:48:23.891 CC test/unit/lib/util/base64.c/base64_ut.o 00:48:24.459 LINK base64_ut 00:48:24.459 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:48:25.412 LINK nvme_transport_ut 00:48:25.412 LINK bit_array_ut 00:48:25.986 LINK nvme_tcp_ut 00:48:26.554 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:48:26.554 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:48:26.813 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:48:26.813 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:48:27.072 LINK pci_event_ut 00:48:27.331 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:48:27.331 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:48:27.898 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:48:27.898 LINK nvme_io_msg_ut 00:48:27.898 LINK cpuset_ut 00:48:28.157 LINK nvme_pcie_common_ut 00:48:28.157 LINK nvme_fabric_ut 00:48:28.157 LINK nvme_opal_ut 00:48:28.157 LINK subsystem_ut 00:48:28.416 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:48:28.675 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:48:28.934 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:48:28.934 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:48:28.934 LINK crc16_ut 00:48:29.192 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:48:29.192 LINK crc32_ieee_ut 00:48:29.192 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:48:29.451 LINK rpc_ut 00:48:29.451 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:48:29.451 LINK idxd_user_ut 00:48:29.451 LINK nvme_cuse_ut 00:48:29.451 LINK nvme_rdma_ut 00:48:29.710 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:48:29.710 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:48:29.710 LINK crc32c_ut 00:48:29.969 LINK idxd_ut 00:48:30.536 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:48:30.536 LINK crc64_ut 00:48:30.795 CC test/unit/lib/util/dif.c/dif_ut.o 00:48:30.795 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:48:30.795 CC test/unit/lib/util/iov.c/iov_ut.o 00:48:31.054 CC test/unit/lib/rdma/common.c/common_ut.o 00:48:31.054 LINK ftl_l2p_ut 00:48:31.054 LINK iov_ut 00:48:31.054 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:48:31.054 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:48:31.054 LINK vhost_ut 00:48:31.312 LINK common_ut 00:48:31.312 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:48:31.571 LINK ftl_io_ut 00:48:31.571 LINK dif_ut 00:48:31.829 LINK ftl_band_ut 00:48:31.829 LINK ftl_bitmap_ut 00:48:31.829 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:48:32.088 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:48:32.088 LINK ftl_mempool_ut 00:48:33.023 CC test/unit/lib/util/math.c/math_ut.o 00:48:33.023 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:48:33.023 LINK ftl_mngt_ut 00:48:33.281 LINK math_ut 00:48:33.848 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:48:34.106 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:48:34.106 LINK ftl_sb_ut 00:48:34.365 CC test/unit/lib/util/xor.c/xor_ut.o 00:48:34.365 CC test/unit/lib/util/string.c/string_ut.o 00:48:34.365 LINK pipe_ut 00:48:34.624 LINK string_ut 00:48:34.624 LINK ftl_layout_upgrade_ut 
00:48:34.624 LINK xor_ut 00:49:30.858 13:34:47 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:49:30.858 make[1]: Nothing to be done for 'clean'. 00:49:32.762 13:34:51 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:49:32.762 13:34:51 -- common/autotest_common.sh@718 -- $ xtrace_disable 00:49:32.762 13:34:51 -- common/autotest_common.sh@10 -- $ set +x 00:49:33.021 13:34:51 -- spdk/autopackage.sh@48 -- $ timing_finish 00:49:33.021 13:34:51 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:49:33.021 13:34:51 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:49:33.021 13:34:51 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:49:33.021 + [[ -n 2350 ]] 00:49:33.021 + sudo kill 2350 00:49:33.021 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:49:33.033 [Pipeline] } 00:49:33.055 [Pipeline] // timeout 00:49:33.061 [Pipeline] } 00:49:33.077 [Pipeline] // stage 00:49:33.081 [Pipeline] } 00:49:33.097 [Pipeline] // catchError 00:49:33.105 [Pipeline] stage 00:49:33.107 [Pipeline] { (Stop VM) 00:49:33.119 [Pipeline] sh 00:49:33.395 + vagrant halt 00:49:36.683 ==> default: Halting domain... 00:49:44.846 [Pipeline] sh 00:49:45.125 + vagrant destroy -f 00:49:48.409 ==> default: Removing domain... 00:49:48.984 [Pipeline] sh 00:49:49.256 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest_2/output 00:49:49.264 [Pipeline] } 00:49:49.276 [Pipeline] // stage 00:49:49.281 [Pipeline] } 00:49:49.296 [Pipeline] // dir 00:49:49.301 [Pipeline] } 00:49:49.316 [Pipeline] // wrap 00:49:49.322 [Pipeline] } 00:49:49.334 [Pipeline] // catchError 00:49:49.342 [Pipeline] stage 00:49:49.344 [Pipeline] { (Epilogue) 00:49:49.355 [Pipeline] sh 00:49:49.631 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:50:04.559 [Pipeline] catchError 00:50:04.561 [Pipeline] { 00:50:04.575 [Pipeline] sh 00:50:04.856 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:50:04.856 Artifacts sizes are good 00:50:04.865 [Pipeline] } 00:50:04.880 [Pipeline] // catchError 00:50:04.889 [Pipeline] archiveArtifacts 00:50:04.895 Archiving artifacts 00:50:05.209 [Pipeline] cleanWs 00:50:05.219 [WS-CLEANUP] Deleting project workspace... 00:50:05.219 [WS-CLEANUP] Deferred wipeout is used... 00:50:05.224 [WS-CLEANUP] done 00:50:05.226 [Pipeline] } 00:50:05.242 [Pipeline] // stage 00:50:05.247 [Pipeline] } 00:50:05.261 [Pipeline] // node 00:50:05.266 [Pipeline] End of Pipeline 00:50:05.297 Finished: SUCCESS